Science.gov

Sample records for agent based classification

  1. A Library Book Intelligence Classification System based on Multi-agent

    NASA Astrophysics Data System (ADS)

    Pengfei, Guo; Liangxian, Du; Junxia, Qi

    This paper introduces artificial intelligence into the administrative system of the library and presents a multi-agent model of a robot system for book classification. The intelligent robot recognizes book barcodes automatically, and a classification algorithm based on the Chinese library classification scheme is given. The algorithm calculates the exact position of each book and relates it to all similar books, so that the robot can shelve all books of the same class in a single pass without turning back.

  2. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a Bayesian classification method for time series data from an agent-based simulation of the international emissions trading market and compares it with an analysis based on the discrete Fourier transform. The aim is to demonstrate analytical methods that map time series data such as market prices. These methods revealed the following: (1) the classification methods map time series data to distances, which are easier to understand and draw inferences from than the raw series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian method can distinguish agents whose emission reduction targets differ by as little as 1%.

  3. Mass classification in mammography with multi-agent based fusion of human and machine intelligence

    NASA Astrophysics Data System (ADS)

    Xi, Dongdong; Fan, Ming; Li, Lihua; Zhang, Juan; Shan, Yanna; Dai, Gang; Zheng, Bin

    2016-03-01

    Although computer-aided diagnosis (CAD) systems can be applied to classifying breast masses, the effect of such methods on improving radiologists' accuracy in distinguishing malignant from benign lesions remains unclear. This study provides a novel method for classifying breast masses by integrating human and machine intelligence. In this research, 224 breast masses were selected from mammograms in the DDSM database with Breast Imaging Reporting and Data System (BI-RADS) categories. Three observers (a senior and a junior radiologist, as well as a radiology resident) independently read and classified these masses using the positive predictive value (PPV) for each BI-RADS category. Meanwhile, a CAD system was also implemented to classify these breast masses as malignant or benign. To combine the decisions from the radiologists and CAD, a multi-agent fusion method was developed. Significant improvements are observed for the fusion system over either the radiologists or CAD alone. The area under the receiver operating characteristic curve (AUC) of the fusion system increased by 9.6%, 10.3% and 21% compared to that of the senior, junior and resident radiologists, respectively. In addition, the AUCs of the fusion of each individual radiologist with CAD are 3.5%, 3.6% and 3.3% higher than that of CAD alone. Finally, the fusion of all three radiologists with CAD achieved an AUC of 0.957, 5.6% higher than CAD alone. Our results indicate that the proposed fusion method performs better than either radiologists or CAD alone.
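
    The fusion rule itself is not spelled out in the abstract. A minimal sketch of the general idea, combining a radiologist score with a CAD probability and measuring the AUC gain, might look like the following (the convex-combination weight and the synthetic scores are illustrative assumptions, not the authors' multi-agent method):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 224)          # 1 = malignant, 0 = benign
        # Synthetic stand-ins for the radiologist's BI-RADS-derived PPV
        # score and the CAD malignancy probability.
        radiologist = np.clip(0.3 * y + rng.normal(0.5, 0.2, 224), 0, 1)
        cad = np.clip(0.25 * y + rng.normal(0.5, 0.2, 224), 0, 1)

        w = 0.6                              # hypothetical fusion weight
        fused = w * radiologist + (1 - w) * cad

        for name, s in [("radiologist", radiologist), ("CAD", cad), ("fusion", fused)]:
            print(f"{name:12s} AUC = {roc_auc_score(y, s):.3f}")

    Because the two scores carry independent noise, the fused score typically yields a higher AUC than either input, which is the effect the paper reports.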

  4. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  5. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent-based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. The paper presents PADMA's general architecture and a detailed description of its modules.

  6. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents, and this similarity inspires the heterogeneous agent architecture for WSNs developed in this paper. The proposed architecture views a WSN as a multi-agent system and employs mobile agents to reduce in-network communication. Within this architecture, an energy-based acoustic localization algorithm is proposed, in which an estimate of the target location is obtained by steepest-descent search; the search adapts to the measurement environment by dynamically adjusting its termination condition. Target classification is accomplished by a distributed support vector machine (SVM), with mobile agents employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors, and fusion algorithms merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. The results show that the proposed agent architecture considerably facilitates WSN design and algorithm implementation, and that the localization and classification algorithms are accurate and energy efficient.
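
    As a rough illustration of the localization step, the sketch below fits a source position to an inverse-square acoustic energy model by steepest descent with a backtracking step size. The energy model, source strength, and fixed stopping threshold are assumptions (the paper adapts the termination condition dynamically):

        import numpy as np

        def energy_model(x, sensors, S=1000.0):
            # Acoustic energy received at sensor i: e_i = S / ||x - p_i||^2.
            return S / np.sum((sensors - x) ** 2, axis=1)

        def localize(sensors, energies, x0, max_iter=200, S=1000.0):
            x = np.asarray(x0, float)
            for _ in range(max_iter):
                d2 = np.sum((sensors - x) ** 2, axis=1)
                r = energies - S / d2                      # residuals
                grad = np.sum((4.0 * S * r / d2 ** 2)[:, None] * (x - sensors), axis=0)
                if np.linalg.norm(grad) < 1e-9:            # fixed threshold in this sketch
                    break
                f, t = np.sum(r ** 2), 1.0
                # Backtracking line search along the steepest-descent direction;
                # may still end in a local minimum for unlucky geometries.
                while np.sum((energies - energy_model(x - t * grad, sensors, S)) ** 2) >= f and t > 1e-12:
                    t *= 0.5
                x = x - t * grad
            return x

        sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
        readings = energy_model(np.array([3.0, 6.0]), sensors)
        print(localize(sensors, readings, x0=np.array([5.0, 5.0])))  # approaches (3, 6)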

  7. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP), (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) from international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results, and the distributed training architecture is 50 times faster than standard iterative training methods.

  8. Granular loess classification based

    SciTech Connect

    Browzin, B.S.

    1985-05-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.05-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.

  9. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    PubMed

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work studies the organization of SEEG signals during epileptic seizures using a multi-agent system approach, based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. To evaluate this approach, we propose a distributed collaborative method for the classification of the signals of interest. The new multi-criteria classification method provides a relevant organisation of brain area structures and brings out elements of epileptogenic networks. The method is compared with another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals.

  10. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability. PMID:28903223
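
    The two-phase negotiation lends itself to a toy sketch: an auction assigns the classification task to the best-placed agents, and a committee vote combines their decisions. The bid function, confidence model, and class count below are invented for illustration; the paper's agents use power spectral density features and Gaussian process classification:

        import numpy as np

        rng = np.random.default_rng(1)

        class Agent:
            def __init__(self, name, energy, snr):
                self.name, self.energy, self.snr = name, energy, snr

            def bid(self):
                # Hypothetical bid: agents with more remaining energy and
                # better signal quality offer to take on the task.
                return self.energy * self.snr

            def classify(self):
                # Stand-in for Gaussian process classification of an audio
                # clip: returns (predicted class, confidence).
                conf = min(0.99, 0.5 + 0.4 * self.snr)
                return int(rng.integers(0, 3)), conf

        agents = [Agent("a1", 0.9, 0.8), Agent("a2", 0.4, 0.9), Agent("a3", 0.7, 0.3)]

        # Phase 1: auction -- the top bidders win the classification task.
        winners = sorted(agents, key=lambda a: a.bid(), reverse=True)[:2]

        # Phase 2: committee decision -- confidence-weighted vote among winners.
        votes = np.zeros(3)
        for a in winners:
            label, conf = a.classify()
            votes[label] += conf
        print("committee decision:", int(np.argmax(votes)))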

  11. Classification-based reasoning

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Segami, Carlos

    1991-01-01

    A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.

  12. [General adverse reactions to contrast agents. Classification and general concepts].

    PubMed

    Aguilar García, J J; Parada Blázquez, M J; Vargas Serrano, B; Rodríguez Romero, R

    2014-06-01

    General adverse reactions to intravenous contrast agents are uncommon, although relevant due to the growing number of radiologic tests that use iodinated or gadolinium-based contrast agents. Although most of these reactions are mild, some patients can experience significant reactions that radiologists should know how to prevent and treat.

  13. A new multi criteria classification approach in a multi agent system applied to SEEG analysis

    PubMed Central

    Kinie, Abel; Ndiaye, Mamadou Lamine L.; Montois, Jean-Jacques; Jacquelet, Yann

    2007-01-01

    This work studies the organization of SEEG signals during epileptic seizures using a multi-agent system approach, based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. To evaluate this approach, we propose a distributed collaborative method for the classification of the signals of interest. The new multi-criteria classification method provides a relevant organisation of brain area structures and brings out elements of epileptogenic networks. The method is compared with another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals. PMID:18002381

  14. Agent-based forward analysis

    SciTech Connect

    Kerekes, Ryan A.; Jiao, Yu; Shankar, Mallikarjun; Potok, Thomas E.; Lusk, Rick M.

    2008-01-01

    We propose software agent-based "forward analysis" for efficient information retrieval in a network of sensing devices. In our approach, processing is pushed to the data at the edge of the network via intelligent software agents rather than pulling data to a central facility for processing. The agents are deployed with a specific query and perform varying levels of analysis of the data, communicating with each other and sending only relevant information back across the network. We demonstrate our concept in the context of face recognition using a wireless test bed comprising PDA cell phones and laptops. We show that agent-based forward analysis can provide a significant increase in retrieval speed while decreasing bandwidth usage and information overload at the central facility.

  15. Detection and classification of threat agents via high-content assays of mammalian cells.

    PubMed

    Tencza, Sarah B; Sipe, Michael A

    2004-01-01

    One property common to all chemical or biological threat agents is that they damage mammalian cells. A threat detection and classification method based on the effects of compounds on cells has been developed. This method employs high-content screening (HCS), a concept in drug discovery that enables those who practice cell-based assays to generate deeper biological information about the compounds they are testing. A commercial image-based cell screening platform comprising fluorescent reagents, automated image acquisition hardware, image analysis algorithms, data management and informatics was used to develop assays and detection/classification methods for threat agents. These assays measure a cell's response to a compound, which may include activation or inhibition of signal transduction pathways, morphological changes or cytotoxic effects. Data on cell responses to a library of compounds were collected and used as a training set. At the EILATox-Oregon Workshop, cellular responses following exposure to unknown samples were measured by conducting assays of p38 MAP kinase, NF-kappaB, extracellular-signal related kinase (ERK) MAP kinase, cyclic AMP-response element binding protein (CREB), cell permeability, lysosomal mass and nuclear morphology. Although the assays appeared to perform well, only four of the nine toxic samples were detected. The system was specific, however: no false positives were detected. Opportunities for improvement were identified during the course of this enlightening workshop, and some of these improvements were applied in subsequent tests in the Cellomics laboratories, resulting in a higher detection rate. Thus, an HCS approach was shown to have potential in detecting threat agents, but additional work is necessary to make this a comprehensive detection and classification system.

  16. Standoff lidar simulation for biological warfare agent detection, tracking, and classification

    NASA Astrophysics Data System (ADS)

    Jönsson, Erika; Steinvall, Ove; Gustafsson, Ove; Kullander, Fredrik; Jonsson, Per

    2010-04-01

    Lidar has been identified as a promising sensor for remote detection of biological warfare agents (BWA). Elastic IR lidar can be used for cloud detection at long ranges, and UV laser-induced fluorescence can be used to discriminate BWA from naturally occurring aerosols. This paper describes a simulation tool for lidar-based detection, tracking and classification of aerosol clouds. The cloud model, available from another project, has been integrated into the tool; it takes into account the type of aerosol, the type of release (plume or puff), the amount of BWA, winds, height above the ground and terrain roughness. The model input includes laser and receiver parameters for both the IR and UV channels as well as the optical parameters of the background, cloud and atmosphere. The wind and cloud conditions and terrain roughness are specified for the cloud simulation, and the search area, the angular sampling resolution and the IR laser pulse repetition frequency define the search conditions. After cloud detection in the elastic mode, the cloud can be tracked using appropriate algorithms. In the tracking mode, classification using fluorescence spectral emission is simulated and tested by correlation against known spectra. Other methods for classification based on elastic backscatter are also discussed, as well as the determination of particle concentration. The simulation estimates and displays the lidar response, the cloud concentration and the goodness of fit for the fluorescence-based classification.

  17. Nonlinear knowledge-based classification.

    PubMed

    Mangasarian, Olvi L; Wild, Edward W

    2008-10-01

    In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.

  18. Hydrologic-Based Soil Texture Classification

    NASA Astrophysics Data System (ADS)

    Groenendyk, D.; Ferré, T. P. A.; Thorp, K.; Rice, A. K.; Crow, W. T.

    2014-12-01

    Historically, soil texture classifications have been created from particle sizes with the purpose of describing the agricultural function of different soils. Clustering algorithms allow a wider range of bases for soil texture classifications, including combinations of observations, behaviors, and soil attributes; recent studies using clustering methods not based on mechanical particle-size properties show promising results (Bormann 2010, Twarakavi 2010). Here we present a methodology for creating and evaluating multiple classifications. The approach uses a k-means clustering algorithm to classify soil textures across the USDA soil texture triangle, with classifications based on responses to the hydrologic processes of infiltration, drainage, and evapotranspiration. Two similarity indices are introduced to quantify differences among the cluster-based classifications and the USDA soil texture classification. To improve our understanding of the new similarity indices, they were tested against commonly used indices based on distance (Rand, Jaccard, VI) or membership (Hubert's, Normalized Γ) for comparing clusters. Using quantitative measures to compare classifications gives a quick and scalable way to find observations that cluster in similar ways, and we propose that this could be a novel approach to identifying measurements that characterize hydrologic processes of interest. Intriguingly, some classifications are dissimilar to the USDA soil texture classes: the results capture common but distinctive hydrologic responses shared by soils that cross multiple USDA texture classes. By clustering on a single variable, the resulting clusters have a physical meaning and represent soils that share specific and similar hydrologic responses and behaviors. This provides greater insight into the importance of soil texture heterogeneity and determining relevant…
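
    A minimal sketch of the clustering step, assuming hydrologic response summaries have already been simulated per texture (the three synthetic feature columns below stand in for infiltration, drainage, and evapotranspiration responses):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # One row per soil texture sampled from the USDA triangle; columns
        # are simulated hydrologic responses (synthetic stand-ins here).
        features = np.column_stack([
            rng.uniform(0, 1, 200),   # e.g., cumulative infiltration
            rng.uniform(0, 1, 200),   # e.g., drainage
            rng.uniform(0, 1, 200),   # e.g., evapotranspiration
        ])

        km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(features)
        labels = km.labels_   # hydrologic classes, comparable to the 12 USDA classes
        print(np.bincount(labels))

    The resulting labels can then be compared with the USDA classes using an index such as sklearn.metrics.adjusted_rand_score, in the spirit of the similarity indices the paper introduces.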

  19. Cloud field classification based on textural features

    NASA Technical Reports Server (NTRS)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used: one based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution, which are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed from this matrix in much the same manner as in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near-IR visible channel. The classification algorithm used is the well-known stepwise discriminant analysis, and the overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes…
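
    The GLDV computation lends itself to a compact sketch. This illustrates the general technique, not the paper's exact feature set; the four statistics chosen below are common choices:

        import numpy as np

        def gldv_features(img, d=1):
            """Grey level difference vector statistics for horizontal offset d."""
            diff = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int))
            hist = np.bincount(diff.ravel()).astype(float)
            p = hist / hist.sum()                     # P(D = k)
            levels = np.arange(len(p))
            mean = (levels * p).sum()
            contrast = (levels ** 2 * p).sum()
            entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
            asm = (p ** 2).sum()                      # angular second moment
            return {"mean": mean, "contrast": contrast, "entropy": entropy, "asm": asm}

        img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
        print(gldv_features(img, d=1))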

  1. Nanoparticle-based theranostic agents

    PubMed Central

    Xie, Jin; Lee, Seulki; Chen, Xiaoyuan

    2010-01-01

    Theranostic nanomedicine is emerging as a promising therapeutic paradigm. It takes advantage of the high capacity of nanoplatforms to ferry cargo and loads onto them both imaging and therapeutic functions. The resulting nanosystems, capable of diagnosis, drug delivery and monitoring of therapeutic response, are expected to play a significant role in the dawning era of personalized medicine, and much research effort has been devoted toward that goal. A convenience in constructing such function-integrated agents is that many nanoplatforms are already, themselves, imaging agents. Their well-developed surface chemistry makes it easy to load them with pharmaceutics and promote them to theranostic nanosystems. Iron oxide nanoparticles, quantum dots, carbon nanotubes, gold nanoparticles and silica nanoparticles have been well investigated in the imaging setting and are candidate nanoplatforms for building nanoparticle-based theranostics. In the current article, we outline progress along this line, organized by the category of the core materials, focusing on construction strategies and discussing the challenges and opportunities associated with this emerging technology. PMID:20691229

  2. Agent-based enterprise integration

    SciTech Connect

    N. M. Berry; C. M. Pancerella

    1998-12-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. The enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of the effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses the planned future work.

  3. Agent-based enterprise integration

    SciTech Connect

    N. M. Berry; C. M. Pancerella

    1999-05-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. Their enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; intelligently locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of their effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses their planned future work.

  4. CATS-based Agents That Err

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.

  5. Wavelet-based rotationally invariant target classification

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.; Kerr, David A.

    1997-07-01

    In this paper, a novel approach to feature extraction for rotationally invariant object classification is proposed, based directly on a discrete wavelet transformation. This form of feature extraction retains informative features while eliminating redundant ones, a critical property when analyzing large, high-dimensional images; researchers have usually resorted to pre-processing to reduce the size of the feature space prior to classification. The proposed method employs statistical features extracted directly from the wavelet coefficients generated by a three-level subband decomposition system using a set of orthogonal and regular quadrature mirror filters. The algorithm has two desirable properties: (1) it reduces the number of dimensions of the feature space necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) regardless of the target orientation, it can perform classification with low error rates. Furthermore, the filters used have performed well in image compression but have not previously been applied to target classification, which is demonstrated in this paper. The results of several classification experiments on variously oriented samples of visible-wavelength targets are presented.
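
    A sketch of statistical feature extraction from a three-level wavelet decomposition, using PyWavelets and a Daubechies filter as stand-ins for the paper's quadrature mirror filters (the mean/std/energy statistics are illustrative choices):

        import numpy as np
        import pywt

        def wavelet_features(img, wavelet="db4", level=3):
            """Mean/std/energy statistics of each subband of a 2-D DWT."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            subbands = [coeffs[0]] + [c for detail in coeffs[1:] for c in detail]
            feats = []
            for sb in subbands:
                sb = np.asarray(sb)
                feats += [sb.mean(), sb.std(), np.mean(sb ** 2)]  # energy
            return np.array(feats)

        img = np.random.default_rng(0).normal(size=(64, 64))
        print(wavelet_features(img).shape)   # 10 subbands x 3 statistics = (30,)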

  6. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by
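
    As background for the SRC step, here is a toy sketch of the generic sparse representation classification recipe: code a sample over the concatenated class dictionaries with an L1 penalty, then assign it to the class whose atoms give the smallest reconstruction residual. The authors' four extensions are not reproduced, and the dictionaries below are synthetic:

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(sample, dictionaries, alpha=0.01):
            """dictionaries: {class: (n_features, n_atoms) array of training samples}."""
            D = np.hstack(list(dictionaries.values()))
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            coder.fit(D, sample)                 # L1-regularized sparse coding
            code, start, residuals = coder.coef_, 0, {}
            for cls, Dc in dictionaries.items():
                k = Dc.shape[1]
                part = code[start:start + k]     # coefficients on this class's atoms
                residuals[cls] = np.linalg.norm(sample - Dc @ part)
                start += k
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(0)
        dicts = {"prostate": rng.normal(0, 1, (20, 15)) + 1.0,
                 "background": rng.normal(0, 1, (20, 15)) - 1.0}
        test = rng.normal(0, 1, 20) + 1.0        # resembles the "prostate" class
        print(src_classify(test, dicts))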

  7. Personal Classification Based on Sole Pressure Changes

    NASA Astrophysics Data System (ADS)

    Hata, Yutaka; Yamakawa, Takeshi; Kobashi, Syoji; Kuramoto, Kei; Asari, Kazunari; Taniguchi, Kazuhiko

    Personal classification from sole pressure changes is attractive for intelligent control and home security because users do not need to carry anything, such as an ID card or PIN code. In this study, we propose a personal classification system based on sole pressure changes obtained by a mat-type pressure sensor placed on the floor at the entrance of a home. We employ four features for classifying family members and perform personal classification with a Euclidean distance based method. In experiments on 60 healthy volunteers aged 20 to 80 years, the proposed system successfully classified them, suggesting that it is especially useful for home intelligent systems.
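
    A minimal sketch of the Euclidean-distance classification step. The abstract does not specify the four gait features, so the feature vectors and names below are placeholders:

        import numpy as np

        def nearest_member(features, templates):
            """templates: {person: mean feature vector from enrollment walks}."""
            dists = {p: np.linalg.norm(features - t) for p, t in templates.items()}
            return min(dists, key=dists.get)

        templates = {
            "alice": np.array([0.62, 1.10, 0.33, 0.80]),
            "bob":   np.array([0.91, 0.72, 0.55, 1.20]),
        }
        walk = np.array([0.60, 1.05, 0.35, 0.82])   # new sole-pressure features
        print(nearest_member(walk, templates))      # -> alice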

  8. Multimodal Feature-Based Surface Material Classification.

    PubMed

    Strese, Matti; Schuwerk, Clemens; Iepure, Albert; Steinbach, Eckehard

    2017-01-01

    When a tool is tapped on or dragged over an object surface, vibrations are induced in the tool, which can be captured using acceleration sensors. The tool-surface interaction additionally creates audible sound waves, which can be recorded using microphones. Features extracted from camera images provide additional information about the surfaces. We present an approach for tool-mediated surface classification that combines these signals and demonstrate that the proposed method is robust against variable scan-time parameters. We examine freehand recordings of 69 textured surfaces recorded by different users and propose a classification system that uses perception-related features, such as hardness, roughness, and friction; selected features adapted from speech recognition, such as modified cepstral coefficients applied to our acceleration signals; and surface texture-related image features. We focus on mitigating the effect of variable contact force and exploration velocity conditions on these features as a prerequisite for a robust machine-learning-based approach for surface classification. The proposed system works without explicit scan force and velocity measurements. Experimental results show that our proposed approach allows for successful classification of textured surfaces under variable freehand movement conditions, exerted by different human operators. The proposed subset of six features, selected from the described sound, image, friction force, and acceleration features, leads to a classification accuracy of 74 percent in our experiments when combined with a Naive Bayes classifier.

  9. Adaptive learning based heartbeat classification.

    PubMed

    Srinivas, M; Basil, Tony; Mohan, C Krishna

    2015-01-01

    Cardiovascular diseases (CVD) are a leading cause of unnecessary hospital admissions as well as fatalities, placing an immense burden on the healthcare industry. A process to provide timely intervention can reduce the morbidity rate as well as control rising costs; patients with cardiovascular diseases require quick intervention. Towards that end, automated detection of abnormal heartbeats captured by electrocardiogram (ECG) signals is vital. While cardiologists can identify different heartbeat morphologies quite accurately among different patients, the manual evaluation is tedious and time consuming. In this chapter, we propose new features from the time and frequency domains and, furthermore, feature normalization techniques to reduce inter-patient and intra-patient variations in heartbeat cycles. Our results using the adaptive learning based classifier emulate those reported in the existing literature and in most cases deliver improved performance, while eliminating the need for labeling of signals by domain experts.

  10. An agent based model of genotype editing

    SciTech Connect

    Rocha, L. M.; Huang, C. F.

    2004-01-01

    This paper presents our investigation of an agent-based model of genotype editing, based on several characteristics gleaned from the RNA editing system as observed in several organisms. The incorporation of editing mechanisms in an evolutionary agent-based model provides a means for evolving agents with heterogeneous post-transcriptional processes. The study of this agent-based genotype-editing model has shed some light on the evolutionary implications of RNA editing and has also established an advantageous evolutionary computation algorithm for machine learning. We expect that the proposed model may both facilitate determining the evolutionary role of RNA editing in biology and advance the current state of research in agent-based optimization.

  11. 75 FR 7548 - Amendments to the Select Agents Controls in Export Control Classification Number (ECCN) 1C360 on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... Bureau of Industry and Security 15 CFR Part 774 RIN 0694-AE67 Amendments to the Select Agents Controls in... (EAR) by revising the controls on certain select agents identified in Export Control Classification... and Quarantine Programs (PPQ) list of select agents and toxins. The changes made by APHIS were part...

  12. Detection and classification of organophosphate nerve agent simulants using support vector machines with multiarray sensors.

    PubMed

    Sadik, Omowunmi; Land, Walker H; Wanekaya, Adam K; Uematsu, Michiko; Embrechts, Mark J; Wong, Lut; Leibensperger, Dale; Volykin, Alex

    2004-01-01

    The need for rapid and accurate detection systems is expanding, and the utilization of cross-reactive sensor arrays to detect chemical warfare agents in conjunction with novel computational techniques may prove to be a potential solution to this challenge. We have investigated the detection, prediction, and classification of various organophosphate (OP) nerve agent simulants using sensor arrays with a novel learning scheme known as support vector machines (SVMs). The OPs tested include parathion, malathion, dichlorvos, trichlorfon, paraoxon, and diazinon. A new data reduction software program was written in MATLAB V. 6.1 to extract steady-state and kinetic data from the sensor arrays. The program also creates training sets by mixing and randomly sorting any combination of data categories into both positive and negative cases. The resulting signals were fed into SVM software for "pairwise" and "one-vs-all" classification. Experimental results for this new paradigm show a significant increase in classification accuracy when compared to artificial neural networks (ANNs). Three kernels, the S2000, the polynomial, and the Gaussian radial basis function (RBF), were tested and compared to the ANN. The following measures of performance were considered in the pairwise classification: receiver operating characteristic (ROC) Az indices, specificities, and positive predictive values (PPVs). The increases in ROC Az values, specificities, and PPVs ranged from 5% to 25%, 108% to 204%, and 13% to 54%, respectively, across all OP pairs studied when compared to the ANN baseline. Dichlorvos, trichlorfon, and paraoxon were perfectly predicted, and positive prediction for malathion was 95%.
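
    A compact sketch of the classification stage under this scheme, using scikit-learn's RBF-kernel SVM as a stand-in for the paper's SVM software (the sensor-array feature matrix and labels are synthetic):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Synthetic stand-in for steady-state + kinetic sensor-array features.
        X = rng.normal(size=(300, 12))
        # Hypothetical pairwise task, e.g., paraoxon vs parathion.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True).fit(Xtr, ytr)
        scores = clf.predict_proba(Xte)[:, 1]
        print("ROC Az:", round(roc_auc_score(yte, scores), 3))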

  13. Efficient Agent-Based Cluster Ensembles

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent based methods and are effective even when up to 50% of the agents fail.
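
    A stripped-down sketch of the voting idea: each "agent" clusters only its own subset of the data, and pairwise co-cluster votes are combined into a final consensus clustering. The co-association construction stands in for the paper's utility-driven voting, which is not reproduced here:

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering, KMeans
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=60, centers=3, random_state=0)
        rng = np.random.default_rng(0)
        n = len(X)
        coassoc = np.zeros((n, n))
        counts = np.zeros((n, n))
        for _ in range(10):
            idx = rng.choice(n, size=40, replace=False)   # this agent's share
            labels = KMeans(n_clusters=3, n_init=5,
                            random_state=int(rng.integers(1e6))).fit_predict(X[idx])
            same = labels[:, None] == labels[None, :]
            coassoc[np.ix_(idx, idx)] += same
            counts[np.ix_(idx, idx)] += 1

        # "Votes": fraction of agents that put two points in the same cluster.
        votes = np.divide(coassoc, counts, out=np.zeros_like(coassoc), where=counts > 0)
        final = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                        linkage="average").fit_predict(1 - votes)
        # (scikit-learn >= 1.2; older releases call this argument "affinity")
        print(np.bincount(final))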

  14. Development of a rapid method for the automatic classification of biological agents' fluorescence spectral signatures

    NASA Astrophysics Data System (ADS)

    Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale

    2015-11-01

    Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques for early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra, due to the ubiquity of endogenous biological fluorophores emitting in the UV range, which makes data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate the fluorescence emission spectra of different BA simulants and apply UMEL for their automatic classification. In the second part, we elaborate a strategy for applying UMEL to the discrimination of the spectra of different BA simulants. Through this strategy, it has been possible to discriminate between these simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.

  15. Distance-based classification of keystroke dynamics

    NASA Astrophysics Data System (ADS)

    Tran Nguyen, Ngoc

    2016-07-01

    This paper applies keystroke dynamics to user authentication. The relationship between distance metrics and the data template is analyzed for the first time, and a new distance-based algorithm for keystroke dynamics classification is proposed. Experiments on the CMU keystroke dynamics benchmark dataset yielded an equal error rate of 0.0614. Classifiers using the proposed distance metric outperform existing top-performing keystroke dynamics classifiers that use traditional distance metrics.
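
    The generic distance-template scheme is easy to sketch. Plain Manhattan distance is used below as a classical baseline, not the paper's new metric, and the timing data are synthetic:

        import numpy as np

        def enroll(samples):
            """Build a user template from enrollment typing samples
            (rows = repetitions, columns = hold/interval timings in seconds)."""
            return samples.mean(axis=0)

        def score(template, attempt):
            # Manhattan distance between attempt and template; smaller = more
            # likely genuine. The paper proposes its own metric instead.
            return np.abs(attempt - template).sum()

        rng = np.random.default_rng(0)
        genuine = rng.normal(0.12, 0.02, size=(50, 21))   # one user's timings
        impostor = rng.normal(0.16, 0.03, size=(50, 21))
        t = enroll(genuine[:30])
        g = [score(t, s) for s in genuine[30:]]
        i = [score(t, s) for s in impostor]
        thr = np.median(g + i)   # a real system sweeps thresholds to find the EER
        print("genuine accepted:", np.mean(np.array(g) < thr),
              "impostors rejected:", np.mean(np.array(i) >= thr))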

  16. Speech Segregation based on Binary Classification

    DTIC Science & Technology

    2016-07-15

    Final performance report (5/2012-4/2016), The Ohio State University Research Foundation, 1960 Kenny Road, Columbus, OH 43210-1063.

  17. Analysis of composition-based metagenomic classification.

    PubMed

    Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro

    2012-01-01

    An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of the n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values of n result in sparse frequency vectors that represent similar metagenomic fragments differently, also leading to low configuration scores. Regarding the similarity measure, in…
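
    The encoding step is easy to make concrete. Below is a sketch of the generic n-mer frequency representation, with cosine similarity as one of the interchangeable similarity measures the study compares:

        from itertools import product
        import numpy as np

        def nmer_vector(seq, n=3):
            """Frequency vector of overlapping n-mers over the DNA alphabet."""
            kmers = ["".join(p) for p in product("ACGT", repeat=n)]
            index = {k: i for i, k in enumerate(kmers)}
            v = np.zeros(len(kmers))
            for i in range(len(seq) - n + 1):
                k = seq[i:i + n]
                if k in index:            # skip ambiguous bases such as 'N'
                    v[index[k]] += 1
            return v / max(v.sum(), 1)

        a = nmer_vector("ACGTACGTGGCCAATT")
        b = nmer_vector("ACGTACGAGGCCTATT")
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        print(round(float(cos), 3))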

  18. Assurance in Agent-Based Systems

    SciTech Connect

    Gilliom, Laura R.; Goldsmith, Steven Y.

    1999-05-10

    Our vision of the future of information systems is one that includes engineered collectives of software agents which are situated in an environment over years and which increasingly improve the performance of the overall system of which they are a part. At a minimum, the movement of agent and multi-agent technology into National Security applications, including their use in information assurance, is apparent today. The use of deliberative, autonomous agents in high-consequence/high-security applications will require a commensurate level of protection and confidence in the predictability of system-level behavior. At Sandia National Laboratories, we have defined and are addressing a research agenda that integrates the surety (safety, security, and reliability) into agent-based systems at a deep level. Surety is addressed at multiple levels: The integrity of individual agents must be protected by addressing potential failure modes and vulnerabilities to malevolent threats. Providing for the surety of the collective requires attention to communications surety issues and mechanisms for identifying and working with trusted collaborators. At the highest level, using agent-based collectives within a large-scale distributed system requires the development of principled design methods to deliver the desired emergent performance or surety characteristics. This position paper will outline the research directions underway at Sandia, will discuss relevant work being performed elsewhere, and will report progress to date toward assurance in agent-based systems.

  19. Ecology Based Decentralized Agent Management System

    NASA Technical Reports Server (NTRS)

    Peysakhov, Maxim D.; Cicirello, Vincent A.; Regli, William C.

    2004-01-01

    The problem of maintaining a desired number of mobile agents on a network is not trivial, especially if we want a completely decentralized solution. Decentralized control makes a system more robust and less susceptible to partial failures. The problem is exacerbated on wireless ad hoc networks where host mobility can result in significant changes in the network size and topology. In this paper we propose an ecology-inspired approach to the management of the number of agents. The approach associates agents with living organisms and tasks with food. Agents procreate or die based on the abundance of uncompleted tasks (food). We performed a series of experiments investigating properties of such systems and analyzed their stability under various conditions. We concluded that the ecology based metaphor can be successfully applied to the management of agent populations on wireless ad hoc networks.
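
    The procreate-or-die rule can be made concrete with a toy simulation. The arrival rate, per-agent service rate, and abundance thresholds below are invented; the paper's actual dynamics run decentralized on an ad hoc network:

        import random

        random.seed(0)
        agents = 10
        tasks = 100.0

        for step in range(50):
            arrivals = random.uniform(0, 8)          # new tasks appearing on the network
            completed = min(tasks, agents * 0.5)     # each agent completes ~0.5 tasks/step
            tasks += arrivals - completed
            food_per_agent = tasks / max(agents, 1)
            if food_per_agent > 3:                   # abundance -> procreate
                agents += 1
            elif food_per_agent < 1 and agents > 1:  # scarcity -> die off
                agents -= 1
        print("agents:", agents, "backlog:", round(tasks, 1))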

  20. [Automatic classification method of star spectrum data based on classification pattern tree].

    PubMed

    Zhao, Xu-Jun; Cai, Jiang-Hui; Zhang, Ji-Fu; Yang, Hai-Feng; Ma, Yang

    2013-10-01

    Frequent patterns, which appear frequently in a data set, play an important role in data mining. For stellar spectrum classification tasks, a classification rule mining method based on a classification pattern tree is presented, built on frequent patterns. The procedure is as follows. Firstly, a new tree structure, the classification pattern tree, is introduced, based on the different frequencies of stellar spectral attributes in the database and their different importance for classification; the related concepts and the construction method of the classification pattern tree are also described. Then, the characteristics of the stellar spectrum are mapped onto the classification pattern tree, and the tree is traversed both top-down and bottom-up to extract the classification rules. Meanwhile, the concept of pattern capability is introduced to adjust the number of classification rules and improve the construction efficiency of the classification pattern tree. Finally, SDSS (Sloan Digital Sky Survey) stellar spectral data provided by the National Astronomical Observatory are used to verify the accuracy of the method. The results show that higher classification accuracy is achieved.

  1. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
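
    A compact sketch of the voxel classification step, using scikit-learn's KNN classifier on synthetic multi-scale appearance features (the feature definitions and labels are placeholders, not the paper's automatically selected feature set):

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        # Rows = voxels; columns = local appearance features at several
        # scales (e.g., Gaussian-derivative filter responses), synthetic here.
        X_train = rng.normal(size=(1000, 8))
        y_train = (X_train[:, 0] + 0.8 * X_train[:, 3] > 0.5).astype(int)  # 1 = airway

        knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
        X_new = rng.normal(size=(5, 8))
        # Soft airway probability = fraction of the K neighbors labeled airway.
        print(knn.predict_proba(X_new)[:, 1])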

  2. Texture feature based liver lesion classification

    NASA Astrophysics Data System (ADS)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features for a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. A combination of Gabor, LBP and intensity features improved the results to a final accuracy of 97%.
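
    For reference, the GLCM branch of such a pipeline can be sketched as follows, with scikit-image's graycomatrix as a stand-in implementation (the ROIs and labels below are synthetic placeholders):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def glcm_features(roi):
            """GLCM statistics of an 8-bit lesion ROI; the paper also
            compares LBP, Gabor, intensity, and GLBP descriptors."""
            glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        rng = np.random.default_rng(0)
        rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
        X = np.array([glcm_features(r) for r in rois])
        y = rng.integers(0, 2, 40)                  # placeholder lesion classes
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        print(clf.predict(X[:4]))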

  3. SQL based cardiovascular ultrasound image classification.

    PubMed

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely a Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and a Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved by NBCC, which is better than earlier methods.
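
    The tightly-coupled idea of pushing the classifier's sufficient statistics into the database can be illustrated with a small sketch; sqlite3 stands in for the OLAP database here, and the table, columns, and values are invented:

        import sqlite3

        # Class priors and per-class feature means/variances computed by the
        # database engine itself, rather than in application code.
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE echo (wall_thickness REAL, ef REAL, label TEXT)")
        con.executemany("INSERT INTO echo VALUES (?, ?, ?)", [
            (0.9, 62.0, "normal"), (1.0, 58.0, "normal"),
            (1.6, 35.0, "abnormal"), (1.5, 31.0, "abnormal"),
        ])

        rows = con.execute("""
            SELECT label,
                   COUNT(*)            AS n,
                   AVG(wall_thickness) AS mu_wt,
                   AVG(wall_thickness * wall_thickness)
                       - AVG(wall_thickness) * AVG(wall_thickness) AS var_wt
            FROM echo
            GROUP BY label
        """).fetchall()
        for label, n, mu, var in rows:
            print(label, n, round(mu, 3), round(var, 4))

    A Gaussian Naive Bayes classifier then needs only these per-class counts, means, and variances to score a new image's features.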

  4. Digital image-based classification of biodiesel.

    PubMed

    Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Almeida, Valber Elias; Araújo, Thomas Souto Policarpo; Melo, Jessica Priscila; Diniz, Paulo Henrique Gonçalves Dias; Véras, Germano

    2015-07-01

    This work proposes a simple, rapid, inexpensive, and non-destructive methodology based on digital images and pattern recognition techniques for classification of biodiesel according to oil type (cottonseed, sunflower, corn, or soybean). For this, color histograms in the RGB, HSI, and grayscale channels (extracted from digital images), and their combinations, were used as analytical information, which was then statistically evaluated using Soft Independent Modeling by Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and variable selection using the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). Despite good performances by the SIMCA and PLS-DA classification models, SPA-LDA provided better results (up to 95% for all approaches) in terms of accuracy, sensitivity, and specificity for both the training and test sets. The variables selected by the Successive Projections Algorithm clearly contained the information necessary for biodiesel type classification. This is important since a product may exhibit different properties depending on the feedstock used, and such variations directly influence quality and, consequently, price. Moreover, intrinsic advantages such as quick analysis, no reagent requirements, and a noteworthy reduction of waste generation (avoiding chemical characterization) all contribute towards the primary objective of green chemistry.
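
    A condensed sketch of the histogram-plus-discriminant pipeline, with LDA alone standing in for SPA-LDA (the SPA variable-selection step is omitted, and the images are synthetic placeholders whose color balance varies by class):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def rgb_histogram(img, bins=16):
            """Concatenated per-channel color histograms of an RGB image."""
            return np.hstack([np.histogram(img[..., c], bins=bins,
                                           range=(0, 256))[0] for c in range(3)])

        rng = np.random.default_rng(0)
        imgs, labels = [], []
        for cls, shift in enumerate([0, 20, 40, 60]):       # 4 oil types
            for _ in range(10):
                imgs.append(rng.integers(shift, 200 + shift // 2, (32, 32, 3)))
                labels.append(cls)
        X = np.array([rgb_histogram(i) for i in imgs])
        lda = LinearDiscriminantAnalysis().fit(X, labels)
        print((lda.predict(X) == labels).mean())   # training accuracy of the sketch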

  5. Genome-Based Taxonomic Classification of Bacteroidetes.

    PubMed

    Hahnke, Richard L; Meier-Kolthoff, Jan P; García-López, Marina; Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia N; Woyke, Tanja; Kyrpides, Nikos C; Klenk, Hans-Peter; Göker, Markus

    2016-01-01

    The bacterial phylum Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.
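
    The study's point that G+C content should be computed directly from genome sequences, rather than taken from species descriptions, reduces to a short calculation; a toy sketch with a hypothetical sequence:

        # G+C content computed directly from a (toy) genome sequence.
        def gc_content(seq: str) -> float:
            """Fraction of G and C among unambiguous A/C/G/T bases."""
            seq = seq.upper()
            acgt = sum(seq.count(b) for b in "ACGT")
            return (seq.count("G") + seq.count("C")) / acgt if acgt else 0.0

        print(f"{gc_content('ATGCGCGTATAGCGGCTA') * 100:.1f} mol% G+C")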

  9. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threat of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents, using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index was obtained using the S2000 kernel, with similar improvements with the Gaussian and polynomial (degree 2) kernels; (2) a significant 173% improvement in specificity was obtained with the S2000 kernel, meaning that the number of false negative errors was reduced by 173% while making no false positive errors, when compared to the AromaScan baseline performance; (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair. For the most difficult classification task, the Parathion versus

  10. Agent-Based Cooperative Control

    DTIC Science & Technology

    2005-12-01

    [91] A. Robertson, G. Inalhan, and J. P. How, "Formation control strategies for a separated spacecraft interferometer," in Proc. of the 1999... [100] M. Tillerson and J. P. How, "Advanced guidance algorithms for spacecraft formation-keeping," in Proc. of the 2002 American Control Conference... based nonlinear control theory. Potential Field: addresses issues of desired interaction such as coordination, formation, and collision

  11. Sleep stage classification based on respiratory signal.

    PubMed

    Tataraidze, Alexander; Anishchenko, Lesya; Korostovtseva, Lyudmila; Kooij, Bert Jan; Bochkarev, Mikhail; Sviryaev, Yurii

    2015-01-01

    One of the research tasks to be solved in developing a sleep monitor is sleep stage classification. This paper presents an algorithm for wakefulness, rapid eye movement (REM) sleep and non-REM sleep detection based on a set of 33 features extracted from the respiratory inductive plethysmography signal and a bagging classifier. Furthermore, a few heuristics based on knowledge about normal sleep structure are suggested. We used data from 29 subjects without sleep-related breathing disorders who underwent a PSG study at a sleep laboratory. Subjects were directed to the PSG study due to suspected sleep disorders. A leave-one-subject-out cross-validation procedure was used to test the classification performance. An accuracy of 77.85 ± 6.63% and a Cohen's kappa of 0.59 ± 0.11 were achieved for the classifier. Using the heuristics we increased the accuracy to 80.38 ± 8.32% and the kappa to 0.65 ± 0.13. We conclude that heuristics may improve automated sleep structure detection based on the analysis of indirect information such as the respiration signal, and are useful for the development of a home sleep monitoring system.
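
    The evaluation protocol (a bagging classifier scored with leave-one-subject-out cross-validation) is easy to mirror with scikit-learn. The sketch below uses random placeholder features and labels, not the paper's 33 respiratory features.

        # Sketch: bagging classifier + leave-one-subject-out cross-validation.
        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(2)
        n_epochs, n_features = 300, 33
        X = rng.normal(size=(n_epochs, n_features))    # one row per 30-s epoch
        y = rng.integers(0, 3, size=n_epochs)          # 0=wake, 1=REM, 2=non-REM
        subjects = rng.integers(0, 10, size=n_epochs)  # subject id for each epoch

        clf = BaggingClassifier(n_estimators=50, random_state=0)
        scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
        print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")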

  12. Research on Classification of Chinese Text Data Based on SVM

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a very important technology within it. At present, there are many mature algorithms for text classification; KNN, NB, AB, SVM, decision trees and other classification methods all show good classification performance. The Support Vector Machine (SVM) classification method is a good classifier in machine learning research. This paper studies the classification performance of the SVM method on Chinese text data, applies the support vector machine to classify Chinese text, and aims to combine academic research with practical application.
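
    A minimal sketch of SVM classification of Chinese text, assuming scikit-learn; character n-gram TF-IDF features avoid the need for a word segmenter, and the tiny corpus and labels are invented for illustration.

        # Sketch: Chinese text classification with char n-gram TF-IDF + linear SVM.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        texts = ["今天股市大涨", "球队赢得比赛", "央行调整利率", "运动员打破纪录"]
        labels = ["finance", "sports", "finance", "sports"]

        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),  # char uni/bigrams
            LinearSVC(),
        )
        model.fit(texts, labels)
        print(model.predict(["利率上调影响股市"]))  # expected: "finance"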

  13. A text classification algorithm based on feature weighting

    NASA Astrophysics Data System (ADS)

    Yang, Han; Cui, Honggang; Tang, Hao

    2017-08-01

    Text classification comes down to matching the data to be classified against certain characteristics. Of course, a complete match is not possible, so the optimal matching result must be selected to complete the classification. Aiming at the shortcomings of the traditional KNN text classification algorithm, a KNN text classification algorithm based on feature weighting is proposed. The algorithm considers the contribution of each feature dimension to the classification, assigns different weights to different features, strengthens the role of important features, and improves the classification accuracy of the algorithm.
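
    One way to realize feature-weighted KNN, sketched below under the assumption that relevance is estimated by mutual information (the paper's exact weighting scheme is not reproduced): scaling each feature axis by its weight makes important features dominate the distance computation.

        # Sketch: KNN with feature axes scaled by estimated feature relevance.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        weights = mutual_info_classif(X_tr, y_tr, random_state=0)
        weights /= weights.max() + 1e-12  # normalize weights to [0, 1]

        knn = KNeighborsClassifier(n_neighbors=5)
        knn.fit(X_tr * weights, y_tr)     # weighting stretches informative axes
        print("weighted KNN accuracy:", knn.score(X_te * weights, y_te))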

  14. Cirrhosis classification based on texture classification of random features.

    PubMed

    Liu, Hui; Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. Therefore, in this paper, multisequence MRI, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase images, is used. However, existing CAD does not meet the clinical needs of cirrhosis staging, and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages, so extracting texture features is the primary task. Compared with typical gray level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative; we adopt it and propose CCTCRF for triple classification (normal, early, and middle-and-advanced stage). CCTCRF does not need strong assumptions beyond the sparsity of the image, contains sufficient texture information, comprises a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfying performance, which is also compared with a typical neural network using GLCM features.

  15. "Chromosome": a knowledge-based system for the chromosome classification.

    PubMed

    Ramstein, G; Bernadet, M

    1993-01-01

    Chromosome, a knowledge-based analysis system, has been designed for the classification of human chromosomes. Its aim is to perform an optimal classification by driving a tool box containing procedures for image processing, pattern recognition and classification. This paper presents the general architecture of Chromosome, based on a multiagent system generator. The image processing tool box is described, from the metaphase enhancement to the fine classification. Emphasis is then put on the knowledge base intended for chromosome recognition. The global classification process is also presented, showing how Chromosome proceeds to classify a given chromosome. Finally, we discuss further extensions of the system for karyotype building.

  16. Multimodal Based Classification of Schizophrenia Patients

    PubMed Central

    Cetin, Mustafa S; Houck, Jon M.; Vergara, Victor M.; Miller, Robyn L.; Calhoun, Vince

    2016-01-01

    Schizophrenia is currently diagnosed by physicians through clinical assessment and their evaluation of the patient's self-reported experiences over the longitudinal course of the illness. There is great interest in identifying biologically based markers at the onset of illness, rather than relying on the evolution of symptoms across time. Functional network connectivity shows promise in providing individual-subject predictive power. The majority of previous studies considered the analysis of functional connectivity during resting state using only fMRI. However, exclusive reliance on fMRI to generate such networks may limit inference on the dysfunctional connectivity which is hypothesized to underlie patient symptoms. In this work, we propose a framework for classification of schizophrenia patients and healthy control subjects based on using both fMRI and band-limited envelope correlation metrics in MEG to interrogate functional network components in the resting state. Our results show that the combination of these two methods provides valuable information that captures fundamental characteristics of brain network connectivity in schizophrenia. Such information is useful for the prediction of schizophrenia patients. Classification accuracy was improved significantly (up to ≈7%) relative to the fMRI method alone and (up to ≈21%) relative to the MEG method alone. PMID:26736831

  17. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for automatic classification in the library science field using the facet classification principles of Colon Classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing of title key words. (Author/LRW)

  19. Agent Based Modeling Applications for Geosciences

    NASA Astrophysics Data System (ADS)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements to keep track of thousands to millions of agents; a lack of methods and strategies for model validation; and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on regional and temporal scales. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and to determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in
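
    A toy illustration of the bottom-up approach described above: each agent follows one simple consumption rule, and the system-level trajectory of a shared resource emerges from their interactions. This is purely illustrative, not a geoscience model.

        # Minimal agent-based simulation: simple per-agent rules, emergent dynamics.
        import random

        random.seed(0)
        resource = 1000.0
        agents = [{"demand": random.uniform(0.5, 2.0)} for _ in range(100)]

        for step in range(10):
            for agent in agents:
                # Rule: consume full demand when the resource is plentiful,
                # cut back to half when it runs low (a simple conservation rule).
                use = agent["demand"] if resource > 500 else agent["demand"] * 0.5
                resource = max(resource - use, 0.0)
            resource += 50.0  # constant natural recharge per step
            print(f"step {step}: shared resource = {resource:.1f}")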

  20. Classification of Chemicals Based On Structured Toxicity ...

    EPA Pesticide Factsheets

    Thirty years' and millions of dollars' worth of pesticide registration toxicity studies, historically stored as hardcopy and scanned documents, have been digitized into highly standardized and structured toxicity data within the Toxicity Reference Database (ToxRefDB). Toxicity-based classifications of chemicals were performed as a model application of ToxRefDB. These endpoints will ultimately provide the anchoring toxicity information for the development of predictive models and biological signatures utilizing in vitro assay data. Utilizing query and structured data mining approaches, toxicity profiles were uniformly generated for more than 300 chemicals. Based on observation rate, species concordance and regulatory relevance, individual and aggregated effects have been selected to classify the chemicals, providing a set of predictable endpoints. ToxRefDB exhibits the utility of transforming unstructured toxicity data into structured data and, furthermore, into computable outputs, and serves as a model for applying such data to address modern toxicological problems.

  1. Monsoon Forecasting based on Imbalanced Classification Techniques

    NASA Astrophysics Data System (ADS)

    Ribera, Pedro; Troncoso, Alicia; Asencio-Cortes, Gualberto; Vega, Inmaculada; Gallego, David

    2017-04-01

    Monsoonal systems are quasiperiodic processes of the climate system that control seasonal precipitation over different regions of the world. The Western North Pacific Summer Monsoon (WNPSM) is one such monsoon, and it is known to have a great impact both on the global climate and on the total precipitation of very densely populated areas. The interannual variability of the WNPSM over the last 50-60 years has been related to different climatic indices such as El Niño, El Niño Modoki, the Indian Ocean Dipole and the Pacific Decadal Oscillation. Recently, a new and longer series characterizing the monthly evolution of the WNPSM, the WNP Directional Index (WNPDI), has been developed, extending its length from about 50 years to more than 100 years (1900-2007). Imbalanced classification techniques have been applied to the WNPDI in order to check the capability of traditional climate indices to capture and forecast the evolution of the WNPSM. The forecasting problem has been transformed into a binary classification problem, in which the positive class represents the occurrence of an extreme monsoon event. Given that the number of extreme monsoons is much lower than the number of non-extreme monsoons, the resulting classification problem is highly imbalanced. The complete dataset is composed of 1296 instances, of which only 71 (5.47%) samples correspond to extreme monsoons. Twenty predictor variables based on the cited climatic indices have been proposed, and models based on trees, black-box models such as neural networks, support vector machines and nearest neighbors, and ensemble-based techniques such as random forests have been used to forecast the occurrence of extreme monsoons. It can be concluded that the methodology proposed here reports promising results according to the quality parameters evaluated, and predicts extreme monsoons over a temporal horizon of a month with high accuracy. From a climatological point of view
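
    A hedged sketch of the imbalanced setup: with roughly 5% positives, class weighting (one common remedy) keeps the forest from ignoring the rare class. The synthetic matrix stands in for the twenty climate-index predictors; nothing below reproduces the paper's data.

        # Sketch: imbalanced binary classification with a class-weighted forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1296, 20))              # 20 climate-index predictors
        y = (rng.random(1296) < 0.0547).astype(int)  # ~5.47% extreme-monsoon months

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                     random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te), zero_division=0))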

  2. Patterns of Use of an Agent-Based Model and a System Dynamics Model: The Application of Patterns of Use and the Impacts on Learning Outcomes

    ERIC Educational Resources Information Center

    Thompson, Kate; Reimann, Peter

    2010-01-01

    A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…

  3. NISAC Agent Based Laboratory for Economics

    SciTech Connect

    Downes, Paula; Davis, Chris; Eidson, Eric; Ehlen, Mark; Gieseler, Charles; Harris, Richard

    2006-10-11

    The software provides large-scale microeconomic simulation of complex economic and social systems (such as supply chain and market dynamics of businesses in the US economy) and their dependence on physical infrastructure systems. The system is based on agent simulation, where each entity of interest in the system to be modeled (for example, a bank, individual firms, or consumer households) is specified in a data-driven sense to be individually represented by an agent. The agents interact using rules of interaction appropriate to their roles, and through those interactions complex economic and social dynamics emerge. The software is implemented in three tiers: a Java-based visualization client, a C++ control mid-tier, and a C++ computational tier.

  4. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincaré plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  5. Agent-Based Automated Algorithm Generator

    DTIC Science & Technology

    2010-01-12

    Detection and Isolation Agent (FDIA), Prognostic Agent (PA), Fusion Agent (FA), and Maintenance Mining Agent (MMA). FDI agents perform diagnostics...manner and loosely coupled). The library of D/P algorithms will be hosted in server-side agents, consisting of four types of major agents: Fault

  6. Pathway-based classification of cancer subtypes

    PubMed Central

    2012-01-01

    Background: Molecular markers based on gene expression profiles have been used in experimental and clinical settings to distinguish cancerous tumors in stage, grade, survival time, metastasis, and drug sensitivity. However, most significant gene markers are unstable (not reproducible) among data sets. We introduce a standardized method for representing cancer markers as 2-level hierarchical feature vectors, with a basic gene level as well as a second level of (more stable) pathway markers, for the purpose of discriminating cancer subtypes. This extends standard gene expression arrays with new pathway-level activation features obtained directly from off-the-shelf gene set enrichment algorithms such as GSEA. Such so-called pathway-based expression arrays are significantly more reproducible across datasets. Such reproducibility will be important for the clinical usefulness of genomic markers, and augments currently accepted cancer classification protocols. Results: The present method produced more stable (reproducible) pathway-based markers for discriminating breast cancer metastasis and ovarian cancer survival time. Between two datasets for breast cancer metastasis, the intersection of standard significant gene biomarkers totaled 7.47% of selected genes, compared to 17.65% using pathway-based markers; the corresponding percentages for ovarian cancer datasets were 20.65% and 33.33% respectively. Three pathways, consisting of Type_1_diabetes_mellitus, Cytokine-cytokine_receptor_interaction and Hedgehog_signaling (all previously implicated in cancer), are enriched in both the ovarian long-survival and breast non-metastasis groups. In addition, integrating pathway and gene information, we identified five (ID4, ANXA4, CXCL9, MYLK, FBXL7) and six (SQLE, E2F1, PTTG1, TSTA3, BUB1B, MAD2L1) known cancer genes significant for ovarian and breast cancer respectively. Conclusions: Standardizing the analysis of genomic data in the process of cancer staging, classification and analysis is
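
    The core construction, pathway-level features aggregated from gene-level expression, can be sketched simply; here a plain mean over each gene set stands in for a GSEA-style enrichment score, and the gene sets themselves are toy placeholders.

        # Sketch: build pathway-level features by aggregating expression over gene sets.
        import numpy as np

        genes = ["ID4", "ANXA4", "CXCL9", "MYLK", "FBXL7", "E2F1"]
        expression = np.random.default_rng(4).normal(size=(10, len(genes)))  # 10 samples

        pathways = {  # toy gene sets; real ones would come from e.g. KEGG
            "cytokine_receptor_interaction": ["CXCL9", "ANXA4"],
            "cell_cycle": ["E2F1", "FBXL7"],
        }

        idx = {g: i for i, g in enumerate(genes)}
        pathway_features = np.column_stack([
            expression[:, [idx[g] for g in members]].mean(axis=1)
            for members in pathways.values()
        ])
        print(pathway_features.shape)  # (10 samples, 2 pathway-level features)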

  7. Sentiment classification technology based on Markov logic networks

    NASA Astrophysics Data System (ADS)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on MLNs demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  8. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. This algorithm can be applied under different computational constraints that incorporate path-based problems. Implementation of the algorithm can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and its extensive derivation enables it to solve shortest-path problems.

  9. FIPA agent based network distributed control system

    SciTech Connect

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capability to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS-based, control-oriented ontology markup language has been developed to standardize the description of an arbitrary control system data processor. Control processes described in this language are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  10. Classification techniques based on AI application to defect classification in cast aluminum

    NASA Astrophysics Data System (ADS)

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    This paper describes the Artificial Intelligence techniques applied to the interpretation of images of cast aluminum surfaces presenting different defects. The whole process includes on-line defect detection, feature extraction and defect classification. These topics are discussed in depth throughout the paper. The data preprocessing, as well as segmentation and feature extraction, are described; the algorithms employed, along with the descriptors used, are shown. A syntactic filter has been developed to model the information and to generate the input vector to the classification system. Classification of defects is achieved by means of rule-based systems, fuzzy models and neural nets. Different classification subsystems work together to solve a pattern recognition problem (hybrid systems). Firstly, syntactic methods are used to obtain the filter that reduces the dimension of the input vector to the classification process. Rule-based classification is achieved by associating a grammar with each defect type; the knowledge base is formed by the information derived from the syntactic filter along with the inferred rules. The fuzzy classification subsystem uses production rules with fuzzy antecedents whose consequents are membership rates for every defect type. Different architectures of neural nets have been implemented, with different results, as shown in the paper. At the highest classification level, the information given by the heterogeneous systems, as well as the history of the process, is supplied to an Expert System in order to drive the casting process.

  11. Contextual segment-based classification of airborne laser scanner data

    NASA Astrophysics Data System (ADS)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-06-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset by segmentation errors. We combine different point cloud segmentation methods to minimise both under- and over-segmentation. We propose a contextual segment-based classification using a Conditional Random Field. Segment adjacencies are represented by edges in the graphical model and characterised by a range of features of points along the segment borders. A mix of small and large segments allows the interaction between nearby and distant points. Results of the segment-based classification are compared to results of a point-based CRF classification. Whereas only a small advantage of the segment-based classification is observed for the ISPRS Vaihingen dataset with 4-7 points/m2, the percentage of correctly classified points in a 30 points/m2 dataset of Rotterdam amounts to 91.0% for the segment-based classification vs. 82.8% for the point-based classification.

  12. DNA sequence analysis using hierarchical ART-based classification networks

    SciTech Connect

    LeBlanc, C.; Hruska, S.I.; Katholi, C.R.; Unnasch, T.R.

    1994-12-31

    Adaptive resonance theory (ART) describes a class of artificial neural network architectures that act as classification tools which self-organize, work in real-time, and require no retraining to classify novel sequences. We have adapted ART networks to provide support to scientists attempting to categorize tandem repeat DNA fragments from Onchocerca volvulus. In this approach, sequences of DNA fragments are presented to multiple ART-based networks which are linked together into two (or more) tiers; the first provides coarse sequence classification while the subsequent tiers refine the classifications as needed. The overall rating of the resulting classification of fragments is measured using statistical techniques based on those introduced to validate results from traditional phylogenetic analysis. Tests of the Hierarchical ART-based Classification Network, or HABclass network, indicate its value as a fast, easy-to-use classification tool which adapts to new data without retraining on previously classified data.

  13. Structure-based algorithms for microvessel classification.

    PubMed

    Smith, Amy F; Secomb, Timothy W; Pries, Axel R; Smith, Nicolas P; Shipley, Rebecca J

    2015-02-01

    Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules. © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd.

  14. Multiscale agent-based consumer market modeling.

    SciTech Connect

    North, M. J.; Macal, C. M.; St. Aubin, J.; Thimmapuram, P.; Bragen, M.; Hahn, J.; Karr, J.; Brigham, N.; Lacy, M. E.; Hampton, D.; Decision and Information Sciences; Procter & Gamble Co.

    2010-05-01

    Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that could more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This is particularly critical when the need is for a model that can be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the details this type of modeling requires. However, a complementary method - agent-based modeling - shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine if brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems. In these situations, it directly influenced managerial decision making and produced substantial cost savings.

  15. Agent-based modelling in synthetic biology

    PubMed Central

    2016-01-01

    Biological systems exhibit complex behaviours that emerge at many different levels of organization. These span the regulation of gene expression within single cells to the use of quorum sensing to co-ordinate the action of entire bacterial colonies. Synthetic biology aims to make the engineering of biology easier, offering an opportunity to control natural systems and develop new synthetic systems with useful prescribed behaviours. However, in many cases, it is not understood how individual cells should be programmed to ensure the emergence of a required collective behaviour. Agent-based modelling aims to tackle this problem, offering a framework in which to simulate such systems and explore cellular design rules. In this article, I review the use of agent-based models in synthetic biology, outline the available computational tools, and provide details on recently engineered biological systems that are amenable to this approach. I further highlight the challenges facing this methodology and some of the potential future directions. PMID:27903820

  16. Intelligent Complex Evolutionary Agent-Based Systems

    NASA Astrophysics Data System (ADS)

    Iantovics, Barna; Enǎchescu, Cǎlin

    2009-04-01

    In this paper, we investigate the possibility of developing intelligent agent-based complex systems that use evolutionary learning techniques in order to adapt for the efficient solving of problems by reorganizing their structure. For this investigation, a complex multiagent system called EAMS (Evolutionary Adaptive Multiagent System) is proposed, which, using an evolutionary learning technique, can learn different patterns of reorganization. The study proves that evolutionary techniques can successfully be used to create complex multiagent systems capable of intelligently reorganizing their structure during their life cycle. In practice, the intelligence of a computational system in general, and of an agent-based system in particular, is established by how efficiently and flexibly the system can solve difficult problems.

  17. 2016 Updated MASCC/ESMO consensus recommendations: Emetic risk classification and evaluation of the emetogenicity of antineoplastic agents.

    PubMed

    Jordan, Karin; Chan, Alexandre; Gralla, Richard J; Jahn, Franziska; Rapoport, Bernardo; Warr, David; Hesketh, Paul J

    2017-01-01

    Employing the same framework as in previous guideline updates, antineoplastic agents were classified into four emetic risk categories. The classification of the emetogenic level of new antineoplastic agents, especially oral drugs, represents an increasing challenge. Accurate reporting of the emetogenicity of new antineoplastic agents in the absence of preventive antiemetic treatment is rarely available. A systematic search was conducted for drugs approved from 2009 until June 2015 using EMBASE and PubMed. The search term was "drug name." The restrictions were language (English records only), date (2009 to 2015), and level of evidence ("clinical trial"). From January 2009 to June 2015, 42 new antineoplastic agents were identified, and a systematic search was conducted to identify relevant studies to help define emetic risk levels. The reported incidence of vomiting varied across studies for many agents, but there was adequate evidence to allow 41 of the 42 new antineoplastic agents to be classified according to emetogenic risk. No highly emetogenic agents were identified. Seven moderately emetogenic agents, 26 low emetogenic agents, and eight minimally emetogenic agents were identified and classified accordingly. The MASCC/ESMO update committee also recommended reclassification of the combination of an anthracycline and cyclophosphamide (AC) as highly emetogenic. Despite several limitations, we have attempted to provide a reasonable approximation of the emetic risk associated with new antineoplastic agents through a comprehensive search of the available literature. Hopefully, by the next update, more precise information on emetic risk will have been collected during the new agent development process.

  18. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node 6-strut pyramid structure which is being used by the NASA - Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; it is by the sequence of activities: strut-extension, changing the center of gravity and falling that the tetrahedron "moves". Currently, strut-extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendible strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.

  19. Agent-Based Modeling in Systems Pharmacology.

    PubMed

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.

  20. Knowledge base image classification using P-trees

    NASA Astrophysics Data System (ADS)

    Seetha, M.; Ravi, G.

    2010-02-01

    Image classification is the process of assigning classes to the pixels in remotely sensed images and is important for GIS applications, since the classified image is much easier to incorporate than the original unclassified image. To resolve misclassification in traditional parametric classifiers like the Maximum Likelihood Classifier, a neural network classifier is implemented using the back propagation algorithm. Extra spectral and spatial knowledge acquired from ancillary information is required to improve the accuracy and remove spectral confusion. To build the knowledge base automatically, this paper explores a non-parametric decision tree classifier to extract knowledge from the spatial data in the form of classification rules. A new method is proposed using a data structure called the Peano Count Tree (P-tree) for decision tree classification. The Peano Count Tree is a spatial data organization that provides a lossless compressed representation of a spatial data set and facilitates more efficient classification than other data mining techniques. The accuracy is assessed using overall accuracy, user's accuracy and producer's accuracy for the image classification methods of Maximum Likelihood Classification, neural network classification using back propagation, knowledge base classification, post-classification and the P-tree classifier. The results reveal that the knowledge extracted from the decision tree classifier and the P-tree data structure in the proposed approach removes the problem of spectral confusion to a great extent. It is ascertained that the P-tree classifier surpasses the other classification techniques.
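
    A much-simplified sketch of the P-tree idea: a quadrant tree that stores the count of 1-bits in each quadrant of a binary raster band, stopping at pure quadrants. Real P-tree implementations are more elaborate; this only illustrates the lossless quadrant-count structure.

        # Simplified Peano-count-tree sketch over a binary raster band.
        import numpy as np

        def build_ptree(bitmap):
            """Recursively summarize a 2^k x 2^k 0/1 array by quadrant 1-bit counts."""
            count = int(bitmap.sum())
            n = bitmap.shape[0]
            if n == 1 or count == 0 or count == n * n:
                return {"count": count}  # pure (or single-pixel) quadrant: stop here
            h = n // 2
            return {"count": count, "children": [
                build_ptree(bitmap[:h, :h]), build_ptree(bitmap[:h, h:]),
                build_ptree(bitmap[h:, :h]), build_ptree(bitmap[h:, h:]),
            ]}

        band = (np.random.default_rng(5).random((8, 8)) > 0.5).astype(np.uint8)
        tree = build_ptree(band)
        print(tree["count"], "one-bits in the 8x8 band")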

  1. Spectral-Spatial Hyperspectral Image Classification Based on KNN

    NASA Astrophysics Data System (ADS)

    Huang, Kunshan; Li, Shutao; Kang, Xudong; Fang, Leyuan

    2016-12-01

    Fusion of spectral and spatial information is an effective way in improving the accuracy of hyperspectral image classification. In this paper, a novel spectral-spatial hyperspectral image classification method based on K nearest neighbor (KNN) is proposed, which consists of the following steps. First, the support vector machine is adopted to obtain the initial classification probability maps which reflect the probability that each hyperspectral pixel belongs to different classes. Then, the obtained pixel-wise probability maps are refined with the proposed KNN filtering algorithm that is based on matching and averaging nonlocal neighborhoods. The proposed method does not need sophisticated segmentation and optimization strategies while still being able to make full use of the nonlocal principle of real images by using KNN, and thus, providing competitive classification with fast computation. Experiments performed on two real hyperspectral data sets show that the classification results obtained by the proposed method are comparable to several recently proposed hyperspectral image classification methods.
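
    The two-step pipeline described above can be approximated in a few lines: an SVM produces per-pixel class probabilities, which are then refined by averaging over each pixel's nearest neighbors. Matching neighbors in spectral space, as below, is a simplified stand-in for the paper's nonlocal KNN filtering, and the data are random placeholders.

        # Sketch: SVM probability maps refined by KNN averaging (simplified).
        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        pixels = rng.normal(size=(200, 10))    # 200 pixels x 10 spectral bands
        labels = rng.integers(0, 3, size=200)  # toy land-cover classes
        train = rng.choice(200, size=60, replace=False)

        svm = SVC(probability=True).fit(pixels[train], labels[train])
        prob = svm.predict_proba(pixels)       # initial probability maps

        nn = NearestNeighbors(n_neighbors=10).fit(pixels)
        _, idx = nn.kneighbors(pixels)
        refined = prob[idx].mean(axis=1)       # average each pixel's neighborhood
        print(refined.argmax(axis=1)[:10])     # refined class map (first 10 pixels)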

  2. CATS-based Air Traffic Controller Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision-making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, this research sought to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model, according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision-making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support evaluation of new air traffic control (ATC) methods to support new Air Traffic Management (ATM) concepts. Current approaches that use large, human

  3. [EEG signal classification based on EMD and SVM].

    PubMed

    Li, Shufang; Zhou, Weidong; Cai, Dongmei; Liu, Kai; Zhao, Jianlin

    2011-10-01

    The automatic detection and classification of epileptic EEG waves have great clinical significance. This paper proposes an empirical mode decomposition (EMD) and support vector machine (SVM) based classification method for non-stationary EEG. Firstly, EMD was used to decompose the EEG into multiple empirical mode components. Secondly, effective features were extracted from these scales. Finally, the EEG was classified with an SVM. The experiments indicated that this method could achieve a good classification result, with an accuracy of 99% for interictal and ictal EEGs.
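
    A rough sketch of the EMD-plus-SVM idea, assuming the third-party PyEMD package (distributed as "EMD-signal" on PyPI); synthetic sinusoids stand in for real EEG epochs, and simple IMF energies serve as the features.

        # Sketch: EMD feature extraction + SVM classification of toy "EEG" epochs.
        import numpy as np
        from PyEMD import EMD  # assumed third-party package (pip install EMD-signal)
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 256)

        def make_epoch(seizure):
            freq = 20.0 if seizure else 8.0  # toy ictal vs. interictal rhythm
            return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)

        def imf_energies(signal, n_imfs=4):
            imfs = EMD().emd(signal)[:n_imfs]  # first few empirical mode components
            feats = [float(np.sum(imf ** 2)) for imf in imfs]
            return feats + [0.0] * (n_imfs - len(feats))  # pad if fewer IMFs found

        epochs = [make_epoch(s) for s in [True] * 10 + [False] * 10]
        y = np.array([1] * 10 + [0] * 10)
        X = np.array([imf_energies(e) for e in epochs])

        clf = SVC().fit(X, y)
        print("training accuracy:", clf.score(X, y))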

  4. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.

  5. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  6. Supervised classification of protein structures based on convex hull representation.

    PubMed

    Wang, Yong; Wu, Ling-Yun; Chen, Luonan; Zhang, Xiang-Sun

    2007-01-01

    One of the central problems in functional genomics is to establish classification schemes of protein structures. In this paper the relationships among protein structures are uncovered within the framework of supervised learning. Specifically, novel patterns based on a convex hull representation are first extracted from a protein structure; then the classification system is constructed, and machine learning methods such as neural networks, Hidden Markov Models (HMMs) and Support Vector Machines (SVMs) are applied. The CATH scheme is highlighted in the classification experiments. The results indicate that the proposed supervised classification scheme is effective and efficient.
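
    The convex-hull representation invites simple geometric descriptors. The sketch below, assuming SciPy and scikit-learn, computes a few hull features for toy 3-D point clouds (standing in for residue coordinates) and classifies them with an SVM; it is not the paper's pattern-extraction scheme.

        # Sketch: convex-hull shape features for 3-D point sets + SVM.
        import numpy as np
        from scipy.spatial import ConvexHull
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)

        def hull_features(points):
            """Simple descriptors of a 3-D point cloud's convex hull."""
            hull = ConvexHull(points)
            return [hull.volume, hull.area, len(hull.vertices) / len(points)]

        # Toy "structures": compact vs. elongated random point clouds.
        compact = [rng.normal(size=(50, 3)) for _ in range(10)]
        elongated = [rng.normal(size=(50, 3)) * np.array([4.0, 1.0, 1.0])
                     for _ in range(10)]

        X = np.array([hull_features(p) for p in compact + elongated])
        y = np.array([0] * 10 + [1] * 10)
        clf = SVC().fit(X, y)
        print("training accuracy:", clf.score(X, y))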

  7. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    PubMed

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  8. Comparison Between Revised Atlanta Classification and Determinant-Based Classification for Acute Pancreatitis in Intensive Care Medicine. Why Do Not Use a Modified Determinant-Based Classification?

    PubMed

    Zubia-Olaskoaga, Felix; Maraví-Poma, Enrique; Urreta-Barallobre, Iratxe; Ramírez-Puerta, María-Rosario; Mourelo-Fariña, Mónica; Marcos-Neira, María-Pilar

    2016-05-01

    To compare the classification performance of the Revised Atlanta Classification, the Determinant-Based Classification, and a new modified Determinant-Based Classification according to observed mortality and morbidity. A prospective multicenter observational study conducted over a 1-year period. Forty-six international ICUs (Epidemiology of Acute Pancreatitis in Intensive Care Medicine study). Admitted to an ICU with acute pancreatitis and at least one organ failure. Modified Determinant-Based Classification included four categories: In group 1, patients with transient organ failure and without local complications; in group 2, patients with transient organ failure and local complications; in group 3, patients with persistent organ failure and without local complications; and in group 4, patients with persistent organ failure and local complications. A total of 374 patients were included (mortality rate of 28.9%). When modified Determinant-Based Classification was applied, patients in group 1 presented low mortality (2.26%) and morbidity (5.38%), patients in group 2 presented low mortality (6.67%) and high morbidity (60.71%), patients in group 3 presented high mortality (41.46%) and low morbidity (8.33%), and patients in group 4 presented high mortality (59.09%) and morbidity (88.89%). The area under the receiver operator characteristics curve of modified Determinant-Based Classification for mortality was 0.81 (95% CI, 0.77-0.85), with significant differences in comparison to Revised Atlanta Classification (0.77; 95% CI, 0.73-0.81; p < 0.01), and Determinant-Based Classification (0.77; 95% CI, 0.72-0.81; p < 0.01). For morbidity, the area under the curve of modified Determinant-Based Classification was 0.80 (95% CI, 0.73-0.86), with significant differences in comparison to Revised Atlanta Classification (0.63, 95% CI, 0.57-0.70; p < 0.01), but not in comparison to Determinant-Based Classification (0.81, 95% CI, 0.74-0.88; nonsignificant). Modified Determinant-Based
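
    The four-group scheme above reduces to two binary determinants, which can be encoded directly (illustrative only, not clinical software):

```python
# Direct encoding of the four-group modified Determinant-Based Classification.
def modified_dbc_group(persistent_organ_failure: bool,
                       local_complications: bool) -> int:
    """Group (1-4) for an ICU acute pancreatitis patient with organ failure."""
    if not persistent_organ_failure:                 # transient organ failure
        return 2 if local_complications else 1
    return 4 if local_complications else 3           # persistent organ failure

assert modified_dbc_group(False, False) == 1         # low mortality and morbidity
assert modified_dbc_group(False, True) == 2          # low mortality, high morbidity
assert modified_dbc_group(True, False) == 3          # high mortality, low morbidity
assert modified_dbc_group(True, True) == 4           # high mortality and morbidity
```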

  9. a Curvature Based Adaptive Neighborhood for Individual Point Cloud Classification

    NASA Astrophysics Data System (ADS)

    He, E.; Chen, Q.; Wang, H.; Liu, X.

    2017-09-01

    As a key step in 3D scene analysis, point cloud classification has gained a great deal of attention in the past few years. Due to uneven density, noise and missing data in point clouds, automatically classifying a point cloud with high precision is a very challenging task. The point cloud classification process typically includes the extraction of neighborhood-based statistical information and machine learning algorithms. However, the robustness of the neighborhood is limited by the density and curvature of the point cloud, which leads to label noise in the classification results. In this paper, we propose a curvature based adaptive neighborhood for individual point cloud classification. Our main improvement is the curvature based adaptive neighborhood method, which can derive an ideal 3D local neighborhood for each point and enhance the separability of features. The experimental result on the Oakland benchmark dataset shows that the proposed method can effectively improve the classification accuracy of the point cloud.
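
    One common way to realize such an adaptive neighborhood is to estimate per-point curvature as the PCA "surface variation" and shrink the neighborhood where curvature is high; the sketch below follows that idea, with invented thresholds, probe size, and random stand-in points.

```python
# Curvature-adaptive neighborhoods: flat areas get large neighborhoods,
# high-curvature areas get small ones.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def surface_variation(pts):
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of the local covariance."""
    eigvals = np.linalg.eigvalsh(np.cov((pts - pts.mean(axis=0)).T))
    return eigvals[0] / max(eigvals.sum(), 1e-12)

def adaptive_neighborhoods(points, k_min=10, k_max=60, k_probe=15):
    _, idx = NearestNeighbors(n_neighbors=k_max).fit(points).kneighbors(points)
    neighborhoods = []
    for i in range(len(points)):
        curv = surface_variation(points[idx[i, :k_probe]])
        # flat surface -> large neighborhood, high curvature -> small one
        k = int(round(k_max - (k_max - k_min) * min(curv / 0.1, 1.0)))
        neighborhoods.append(idx[i, :k])
    return neighborhoods

points = np.random.default_rng(2).normal(size=(500, 3))
print("k of first point:", len(adaptive_neighborhoods(points)[0]))
```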

  10. A Curriculum-Based Classification System for Community Colleges.

    ERIC Educational Resources Information Center

    Schuyler, Gwyer

    2003-01-01

    Proposes and tests a community college classification system based on curricular characteristics and their association with institutional characteristics. Seeks readily available data correlates to represent percentage of a college's course offerings that are in the liberal arts. A simple two-category classification system using total enrollment…

  11. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference; the overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification and 15.7% higher than that of the traditional object-based classification method.

  12. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  13. Lipid-based antifungal agents: current status.

    PubMed

    Arikan, S; Rex, J H

    2001-03-01

    Immunocompromised patients are well known to be predisposed to developing invasive fungal infections. These infections are usually difficult to diagnose and, more importantly, the resulting mortality rate is high. The limited number of antifungal agents available and their high rate of toxicity are the major factors complicating the issue. However, the development of lipid-based formulations of existing antifungal agents has opened a new era in antifungal therapy. The best examples are the lipid-based amphotericin B preparations, amphotericin B lipid complex (ABLC; Abelcet), amphotericin B colloidal dispersion (ABCD; Amphotec or Amphocil), and liposomal amphotericin B (AmBisome). These formulations have shown that antifungal activity is maintained while toxicity is reduced. This progress has been followed by the incorporation of nystatin into liposomes; the liposomal nystatin formulation is under development, and studies of it have provided encouraging data. Finally, lipid-based formulations of hamycin, miconazole, and ketoconazole have been developed but remain experimental. Advances in the technology of liposomes and other lipid formulations have provided promising new tools for the management of fungal infections.

  14. Metasample-based sparse representation for tumor classification.

    PubMed

    Zheng, Chun-Hou; Zhang, Lei; Ng, To-Yee; Shiu, Simon C K; Huang, De-Shuang

    2011-01-01

    A reliable and accurate identification of the type of tumors is crucial to the proper treatment of cancers. In recent years, it has been shown that sparse representation (SR) by l1-norm minimization is robust to noise, outliers and even incomplete measurements, and SR has been successfully used for classification. This paper presents a new SR-based method for tumor classification using gene expression data. A set of metasamples is extracted from the training samples, and then an input testing sample is represented as a linear combination of these metasamples by an l1-regularized least squares method. Classification is achieved by using a discriminating function defined on the representation coefficients. Since l1-norm minimization leads to a sparse solution, the proposed method is called metasample-based SR classification (MSRC). Extensive experiments on publicly available gene expression data sets show that MSRC is efficient for tumor classification, achieving higher accuracy than many existing representative schemes.
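
    A compact sketch of this style of classifier, assuming (as one plausible choice) that metasamples are the leading left singular vectors of each class's gene-by-sample training matrix; the data, sizes, and Lasso penalty are illustrative.

```python
# Metasample-based sparse representation classification: code a test sample
# over all metasamples with l1-regularized least squares, then assign the
# class with the smallest reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def metasamples(X_class, r=5):
    U, _, _ = np.linalg.svd(X_class, full_matrices=False)
    return U[:, :r]                                   # genes x r metasamples

def msrc_predict(x, class_dicts):
    D = np.hstack(class_dicts)                        # full metasample dictionary
    coef = Lasso(alpha=0.01, max_iter=10000).fit(D, x).coef_
    residuals, start = [], 0
    for Dc in class_dicts:                            # class-wise residuals
        part = np.zeros_like(coef)
        part[start:start + Dc.shape[1]] = coef[start:start + Dc.shape[1]]
        residuals.append(np.linalg.norm(x - D @ part))
        start += Dc.shape[1]
    return int(np.argmin(residuals))

rng = np.random.default_rng(3)
classes = [rng.normal(loc=m, size=(200, 30)) for m in (0.0, 1.0)]  # genes x samples
dicts = [metasamples(Xc) for Xc in classes]
print("predicted class:", msrc_predict(classes[1][:, 0], dicts))   # expect 1
```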

  15. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  16. Behavior Based Social Dimensions Extraction for Multi-Label Classification.

    PubMed

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions.
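
    One way to realize this idea is to treat each node's row of the adjacency matrix as a "document" of connection behavior, let LDA infer latent community mixtures, and feed the per-node mixtures to a multi-label classifier; the sketch below does exactly that on a synthetic random graph (the graph, labels, and dimension count are stand-ins).

```python
# LDA-derived social dimensions feeding a one-vs-rest multi-label classifier.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(4)
n_nodes = 100
adjacency = (rng.random((n_nodes, n_nodes)) < 0.05).astype(int)
adjacency = np.maximum(adjacency, adjacency.T)      # undirected graph

lda = LatentDirichletAllocation(n_components=8, random_state=0)
social_dimensions = lda.fit_transform(adjacency)    # node x community mixture

labels = rng.integers(0, 2, size=(n_nodes, 3))      # 3 binary labels per node
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(social_dimensions, labels)
print(clf.predict(social_dimensions[:5]))
```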

  17. Information theoretic entropy for molecular classification: oxadiazolamines as potential therapeutic agents.

    PubMed

    Torrens, Francisco; Castellano, Gloria

    2013-06-01

    In this review we present algorithms for classification and taxonomy based on information entropy, followed by structure-activity relationship (SAR) models for the inhibition of human prostate carcinoma cell line DU-145 by 26 derivatives of N-aryl-N-(3-aryl-1,2,4-oxadiazol-5-yl)amines (NNAs). The NNAs are classified using two characteristic chemical properties based on different regions of the molecules. A table of periodic properties of inhibitors of DU-145 human prostate carcinoma cell line is obtained based on structural features from the amine moiety and from the oxadiazole ring. Inhibitors in the same group and period of the periodic table are predicted to have highly similar properties, and those located only in the same group will present moderate similarity. The results of a virtual screening campaign are presented.

  18. Agent-based modelling in synthetic biology.

    PubMed

    Gorochowski, Thomas E

    2016-11-30

    Biological systems exhibit complex behaviours that emerge at many different levels of organization. These span the regulation of gene expression within single cells to the use of quorum sensing to co-ordinate the action of entire bacterial colonies. Synthetic biology aims to make the engineering of biology easier, offering an opportunity to control natural systems and develop new synthetic systems with useful prescribed behaviours. However, in many cases, it is not understood how individual cells should be programmed to ensure the emergence of a required collective behaviour. Agent-based modelling aims to tackle this problem, offering a framework in which to simulate such systems and explore cellular design rules. In this article, I review the use of agent-based models in synthetic biology, outline the available computational tools, and provide details on recently engineered biological systems that are amenable to this approach. I further highlight the challenges facing this methodology and some of the potential future directions.

  19. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  20. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  1. The Atlanta Classification, Revised Atlanta Classification, and Determinant-Based Classification of Acute Pancreatitis: Which Is Best at Stratifying Outcomes?

    PubMed

    Kadiyala, Vivek; Suleiman, Shadeah L; McNabb-Baltar, Julia; Wu, Bechien U; Banks, Peter A; Singh, Vikesh K

    2016-04-01

    To determine which classification is more accurate in stratifying severity. The study used a retrospective analysis of a prospective acute pancreatitis database (June 2005-December 2007). Acute pancreatitis severity was stratified according to the Atlanta classification (AC) 1992, the revised Atlanta classification (RAC) 2012, and the determinant-based classification (DBC) 2012. Receiver operating characteristic analysis (area under the curve) compared the accuracy of each classification. Logistic regression identified predictors of mortality. A total of 338 patients were analyzed: 13% had persistent organ failure (POF) (>48 hours), of whom 37% had multisystem POF, and 11% had pancreatic necrosis, of whom 19% had infected necrosis. Mortality was 4.1%. For predicting mortality (area under the curve), the RAC (0.91) and DBC (0.92) were comparable (P = 0.404); both outperformed the AC (0.81) (P < 0.001). For intensive care unit admission, the RAC (0.85) and DBC (0.85) were comparable (P = 0.949); both outperformed the AC (0.79) (P < 0.05). There were 2 patients in the critical category of the DBC. Multisystem POF was an independent predictor of mortality (odds ratio, 75.0; 95% confidence interval, 13.7-410.6; P < 0.001), whereas single-system POF, sterile necrosis, and infected necrosis were not. The RAC and DBC were generally comparable in stratifying severity. The paucity of patients in the critical category in the DBC limits its utility. Neither classification accounts for the impact of multisystem POF, which was the strongest predictor of mortality.

  2. Agent Based Modeling as an Educational Tool

    NASA Astrophysics Data System (ADS)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. [Figures: a simulation using California GIS data; a simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.]

  3. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
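
    A NumPy-only sketch of that pipeline under simplifying assumptions: the feature rasters are stacked into a height x width x feature tensor, the tensor is unfolded along the feature mode, a truncated SVD (one simple choice of decomposition) supplies the component features, and pixels are classified with k nearest neighbors; all data and sizes are synthetic stand-ins.

```python
# Tensor of feature rasters -> unfolding -> SVD components -> kNN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
H, W, F = 32, 32, 12
tensor = rng.normal(size=(H, W, F))           # height x width x feature rasters

unfolded = tensor.reshape(H * W, F)           # pixels x features (mode-3 unfolding)
_, _, Vt = np.linalg.svd(unfolded, full_matrices=False)
components = unfolded @ Vt[:4].T              # project onto 4 leading components

labels = rng.integers(0, 3, size=H * W)       # stand-in class raster
knn = KNeighborsClassifier(n_neighbors=5).fit(components, labels)
print("training accuracy:", knn.score(components, labels))
```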

  4. Texel-based image classification with orthogonal bases

    NASA Astrophysics Data System (ADS)

    Carbajal-Degante, Erik; Nava, Rodrigo; Olveres, Jimena; Escalante-Ramírez, Boris; Kybic, Jan

    2016-04-01

    Periodic variations in patterns within a group of pixels provide important information about the surface of interest and can be used to identify objects or regions. Hence, a proper analysis can be applied to extract particular features according to some specific image properties. Recently, texture analysis using orthogonal polynomials has gained attention since polynomials characterize the pseudo-periodic behavior of textures through the projection of the pattern of interest over a group of kernel functions. However, the maximum polynomial order is often linked to the size of the texture, which in many cases implies a complex calculation and introduces instability in higher orders, leading to computational errors. In this paper, we address this issue and explore a pre-processing stage to compute the optimal size of the window of analysis, called the "texel." We propose Haralick-based metrics to find the main oscillation period, such that it represents the fundamental texture and captures the minimum information that is sufficient for classification tasks. This procedure avoids the computation of large polynomials and substantially reduces the feature space with small classification errors. Our proposal is also compared against different fixed-size windows. We also show similarities between full-image representations and the ones based on texels in terms of visual structures and feature vectors using two different orthogonal bases: Tchebichef and Hermite polynomials. Finally, we assess the performance of the proposal using well-known texture databases found in the literature.

  5. Intelligent Agent Feasibility Study. Volume 1: Agent-based System Technology

    DTIC Science & Technology

    1998-02-01

    ambitious in its scope. In OAA (Moran, Cheyer, Julia, Martin, & Park, 1997), agents can operate on multiple platforms across a network, new agents can be...find the source and best price for a given item. This area of electronic commerce has been an active area for research in agent-based systems (Chavez...D. (1993). Towards a taxonomy of multi-agent systems. International Journal of Man-Machine Studies 36, 689-704. Chavez, A., Dreilinger, D., Guttman, R

  6. A Classification-based Review Recommender

    NASA Astrophysics Data System (ADS)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

  7. Agent-based models of financial markets

    NASA Astrophysics Data System (ADS)

    Samanidou, E.; Zschischang, E.; Stauffer, D.; Lux, T.

    2007-03-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in the economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and, thereby, provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  8. Better image texture recognition based on SVM classification

    NASA Astrophysics Data System (ADS)

    Liu, Kuan; Lu, Bin; Wei, Yaxun

    2013-10-01

    Texture classification is very important in remote sensing images, X-ray photos, and cell image interpretation and processing, and it is also an active research area in computer vision, image processing, image analysis, and image retrieval. For spatial-domain images, texture analysis can use statistical methods to calculate a texture feature vector. In this paper, we use the gray-level co-occurrence matrix and Gabor filter responses to build the feature vector. For feature-vector classification, one can normally use a Bayesian method, the KNN method, or a BP neural network; here, we use a statistical classification method based on SVM. Image classification generally includes four steps: image preprocessing, feature extraction, feature selection, and classification. We use gray-scale images, extract features by computing the gray-level co-occurrence matrix and applying Gabor filtering, and then use an SVM for training and classification. The test results show that the SVM method is a better way to solve the problem of texture-feature-based image classification, exhibiting strong adaptability and robustness.
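
    A sketch of that pipeline with scikit-image and scikit-learn; the two synthetic texture classes, the chosen GLCM properties, and the Gabor frequencies are illustrative, and the GLCM function names assume scikit-image >= 0.19 (graycomatrix/graycoprops).

```python
# GLCM statistics plus Gabor responses as features, SVM as the classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    for freq in (0.1, 0.3):                         # small Gabor filter bank
        real, _ = gabor(img.astype(float), frequency=freq)
        feats += [real.mean(), real.std()]
    return feats

rng = np.random.default_rng(6)
imgs = [(rng.integers(0, 256, (32, 32)) if c == 0 else
         np.tile(np.arange(32) * 8 % 250, (32, 1)) + rng.integers(0, 6, (32, 32)))
        .astype(np.uint8) for c in (0, 1) for _ in range(10)]
y = [c for c in (0, 1) for _ in range(10)]
X = [texture_features(im) for im in imgs]
print("CV accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```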

  9. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection.

  10. Agent Based Model of Livestock Movements

    NASA Astrophysics Data System (ADS)

    Miron, D. J.; Emelyanova, I. V.; Donald, G. E.; Garner, G. M.

    The modelling of livestock movements within Australia is of national importance for the purposes of the management and control of exotic disease spread, infrastructure development and the economic forecasting of livestock markets. In this paper an agent based model for the forecasting of livestock movements is presented. This models livestock movements from farm to farm through a saleyard. The decision of farmers to sell or buy cattle is often complex and involves many factors such as climate forecast, commodity prices, the type of farm enterprise, the number of animals available and associated off-shore effects. In this model the farm agent's intelligence is implemented using a fuzzy decision tree that utilises two of these factors: the livestock price fetched at the last sale and the number of stock on the farm. On each iteration of the model, farms choose either to buy, sell or abstain from the market, thus creating an artificial supply and demand. The buyers and sellers then congregate at the saleyard, where livestock are auctioned using a second-price sealed bid. The price time series output by the model exhibits properties similar to those found in real livestock markets.
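
    A toy version of this market loop, with the paper's fuzzy decision tree replaced by crisp invented thresholds and the saleyard clearing one lot per step via a second-price (Vickrey) sealed-bid auction:

```python
# Farms decide to buy/sell/abstain from last price and stock level; the
# saleyard clears via a second-price sealed-bid auction.
import random

random.seed(7)

class Farm:
    def __init__(self):
        self.stock = random.randint(50, 500)

    def decide(self, last_price):
        if self.stock > 350 or last_price > 110:
            return "sell"
        if self.stock < 150 and last_price <= 100:
            return "buy"
        return "abstain"

def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    return ranked[0][0], ranked[1][1]

farms = [Farm() for _ in range(20)]
last_price = 100.0
for step in range(5):
    sellers = [f for f in farms if f.decide(last_price) == "sell"]
    buyers = [f for f in farms if f.decide(last_price) == "buy"]
    if sellers and len(buyers) >= 2:
        bids = {id(b): random.uniform(80, 120) for b in buyers}
        _, last_price = second_price_auction(bids)
    print(f"step {step}: price {last_price:.2f}, "
          f"{len(sellers)} sellers, {len(buyers)} buyers")
```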

  11. Agent-based modeling in ecological economics.

    PubMed

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.

  12. Adaptive BCI based on software agents.

    PubMed

    Castillo-Garcia, Javier; Cotrina, Anibal; Benevides, Alessandro; Delisle-Rodriguez, Denis; Longo, Berthil; Caicedo, Eduardo; Ferreira, Andre; Bastos, Teodiano

    2014-01-01

    The selection of features is generally the most difficult part to model in BCIs. Therefore, time and effort are invested in individual feature selection prior to data set training. Another great difficulty regarding the modelling of the BCI topology is the brain signal variability between users. How should this topology be designed in order to implement a system that can be used by a large number of users with an optimal set of features? The proposal presented in this paper allows for feature reduction and classifier selection based on software agents. The software agents contain Genetic Algorithms (GA) and a cost function. The GA uses entropy and mutual information to choose the number of features. For the classifier selection, a cost function was defined. The success rate and Cohen's Kappa coefficient are used as parameters to evaluate classifier performance. The obtained results allow finding a topology, represented as a neural model, for an adaptive BCI in which the number of channels, the features and the classifier are interrelated. The minimal subset of features and the optimal classifier were obtained with the adaptive BCI. Only three EEG channels were needed to obtain a success rate of 93% for the BCI competition III data set IVa.
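
    A small sketch in the spirit of the GA component: chromosomes are binary feature masks, fitness rewards the mutual information of the selected features and penalises subset size, and the best mask survives each generation (elitism). The data, rates, and fitness weights are all invented.

```python
# GA-flavored feature selection with a mutual-information fitness.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 16))                      # stand-in EEG features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # labels tied to features 0 and 3
mi = mutual_info_classif(X, y, random_state=0)

def fitness(mask):
    if not mask.any():
        return -1.0
    return mi[mask].mean() - 0.01 * mask.sum()      # reward MI, penalise size

pop = rng.random((30, 16)) < 0.5
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[scores.argmax()].copy()             # elite preservation
    probs = scores - scores.min() + 1e-9
    probs /= probs.sum()                            # roulette-wheel selection
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    cut = rng.integers(1, 15)                       # one-point crossover
    children = np.vstack([np.concatenate([a[:cut], b[cut:]])
                          for a, b in zip(parents[::2], parents[1::2])])
    children ^= rng.random(children.shape) < 0.02   # bit-flip mutation
    pop = np.vstack([[elite], children, children])[:len(pop)]
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))   # expect 0 and 3 to appear
```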

  13. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  14. Adenosine monophosphate-activated protein kinase-based classification of diabetes pharmacotherapy.

    PubMed

    Dutta, D; Kalra, S; Sharma, M

    2016-09-21

    The current classification of both diabetes and antidiabetes medication is complex, preventing a treating physician from choosing the most appropriate treatment for an individual patient, sometimes resulting in patient-drug mismatch. We propose a novel, simple systematic classification of drugs, based on their effect on adenosine monophosphate-activated protein kinase (AMPK). AMPK is the master regulator of energy metabolism, an energy sensor, activated when cellular energy levels are low, resulting in activation of catabolic processes and inactivation of anabolic processes, having a beneficial effect on glycemia in diabetes. This listing of drugs makes it easier for students and practitioners to analyze drug profiles and match them with patient requirements. It also facilitates the choice of rational combinations, with complementary modes of action. Drugs are classified as stimulators, inhibitors, mixed action, possible action, and no action on AMPK activity. Metformin and glitazones are pure stimulators of AMPK. Incretin-based therapies have a mixed action on AMPK. Sulfonylureas either inhibit AMPK or have no effect on AMPK. The glycemic efficacy of alpha-glucosidase inhibitors, sodium glucose co-transporter-2 inhibitors, colesevelam, and bromocriptine may also involve AMPK activation, which warrants further evaluation. Berberine, salicylates, and resveratrol are newer promising agents in the management of diabetes, having well-documented evidence of AMPK stimulation-mediated glycemic efficacy. Hence, an AMPK-based classification of antidiabetes medications provides a holistic unifying understanding of pharmacotherapy in diabetes. This classification is flexible, with scope for the inclusion of promising future agents.

  15. Epiretinal membrane: optical coherence tomography-based diagnosis and classification

    PubMed Central

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system. PMID:27099458

  16. Epiretinal membrane: optical coherence tomography-based diagnosis and classification.

    PubMed

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system.

  17. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  18. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  19. Comparison and Analysis of Biological Agent Category Lists Based On Biosafety and Biodefense

    PubMed Central

    Tian, Deqiao; Zheng, Tao

    2014-01-01

    Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category lists and the defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category lists and the defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among or between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct their development of biodefense capabilities. Developing a suitable biological agent list will also require strengthened international collaboration and cooperation. PMID:24979754

  20. Agent based modeling in tactical wargaming

    NASA Astrophysics Data System (ADS)

    James, Alex; Hanratty, Timothy P.; Tuttle, Daniel C.; Coles, John B.

    2016-05-01

    Army staffs at division, brigade, and battalion levels often plan for contingency operations. As such, analysts consider the impact and potential consequences of actions taken. The Army Military Decision-Making Process (MDMP) dictates identification and evaluation of possible enemy courses of action; however, non-state actors often do not exhibit the same level and consistency of planned actions that the MDMP was originally designed to anticipate. The fourth MDMP step is a particular challenge, wargaming courses of action within the context of complex social-cultural behaviors. Agent-based Modeling (ABM) and its resulting emergent behavior is a potential solution to model terrain in terms of the human domain and improve the results and rigor of the traditional wargaming process.

  1. Classification systems in Gestational trophoblastic neoplasia - Sentiment or evidenced based?

    PubMed

    Parker, V L; Pacey, A A; Palmer, J E; Tidy, J A; Winter, M C; Hancock, B W

    2017-05-01

    The classification system for Gestational trophoblastic neoplasia (GTN) has proved a controversial topic for over 100 years. Numerous systems simultaneously existed in different countries, with three main rival classifications gaining popularity, namely histological, anatomical and clinical prognostic systems. Until 2000, prior to the combination of the FIGO and WHO classifications, there was no worldwide consensus on the optimal classification system, largely due to a lack of high quality data proving the merit of one system over another. Remarkably, no classification system has yet been validated and prospectively tested. Over time, increasing criticisms have emerged regarding the currently adopted combined FIGO/WHO classification system and its ability to identify the patients most likely to develop primary chemotherapy resistance or disease relapse. This is particularly pertinent for patients with low-risk disease, whereby one in three patients is resistant to first-line therapy, rising to four out of five women who score 5 or 6. This review aims to examine the historical basis of the GTN classification systems and critically appraise the evidence on which they were based. This culminates in a critique of the current FIGO/WHO prognostic system and discussion surrounding clinical preference versus evidence based practice.

  2. Who's your neighbor? neighbor identification for agent-based modeling.

    SciTech Connect

    Macal, C. M.; Howe, T. R.; Decision and Information Sciences; Univ. of Chicago

    2006-01-01

    Agent-based modeling and simulation, based on the cellular automata paradigm, is an approach to modeling complex systems comprised of interacting autonomous agents. Open questions in agent-based simulation focus on scale-up issues encountered in simulating large numbers of agents. Specifically, how many agents can be included in a workable agent-based simulation? One of the basic tenets of agent-based modeling and simulation is that agents only interact and exchange locally available information with other agents located in their immediate proximity or neighborhood of the space in which the agents are situated. Generally, an agent's set of neighbors changes rapidly as a simulation proceeds through time and as the agents move through space. Depending on the topology defined for agent interactions, proximity may be defined by spatial distance for continuous space, adjacency for grid cells (as in cellular automata), or by connectivity in social networks. Identifying an agent's neighbors is a particularly time-consuming computational task and can dominate the computational effort in a simulation. Two challenges in agent simulation are (1) efficiently representing an agent's neighborhood and the neighbors in it and (2) efficiently identifying an agent's neighbors at any time in the simulation. These problems are addressed differently for different agent interaction topologies. While efficient approaches have been identified for agent neighborhood representation and neighbor identification for agents on a lattice with general neighborhood configurations, other techniques must be used when agents are able to move freely in space. Techniques for the analysis and representation of spatial data are applicable to the agent neighbor identification problem. This paper extends agent neighborhood simulation techniques from the lattice topology to continuous space, specifically R2. Algorithms based on hierarchical (quad trees) or non-hierarchical data structures (grid cells) are
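
    A minimal grid-cell (spatial hash) neighbour search for agents moving freely in R^2, in the spirit of the non-hierarchical data structures mentioned above; the cell size, interaction radius, and agent count are illustrative.

```python
# Bucket agents by grid cell, then scan only the 3x3 block of cells around
# a query point instead of all n agents.
import math
import random
from collections import defaultdict

CELL = 1.0   # cell size chosen equal to the interaction radius

def build_grid(positions):
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    return grid

def neighbors(grid, positions, i, radius=CELL):
    cx, cy = (int(c // CELL) for c in positions[i])
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid[(cx + dx, cy + dy)]:
                if j != i and math.dist(positions[i], positions[j]) <= radius:
                    found.append(j)
    return found

random.seed(9)
positions = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]
grid = build_grid(positions)
print("neighbours of agent 0:", neighbors(grid, positions, 0))
```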

  3. EXTENDING AQUATIC CLASSIFICATION TO THE LANDSCAPE SCALE HYDROLOGY-BASED STRATEGIES

    EPA Science Inventory

    Aquatic classification of single water bodies (lakes, wetlands, estuaries) is often based on geologic origin, while stream classification has relied on multiple factors related to landform, geomorphology, and soils. We have developed an approach to aquatic classification based o...

  4. Bazhenov Fm Classification Based on Wireline Logs

    NASA Astrophysics Data System (ADS)

    Simonov, D. A.; Baranov, V.; Bukhanov, N.

    2016-03-01

    This paper considers the main aspects of Bazhenov Formation interpretation and the application of machine learning algorithms to the Kolpashev type section of the Bazhenov Formation; automatic classification algorithms make it possible to change the scale of research from small to large. Machine learning algorithms help interpret the Bazhenov Formation in a reference well and in other wells. During this study, unsupervised and supervised machine learning algorithms were applied to interpret lithology and reservoir properties. This greatly simplifies the routine problem of manual interpretation and reduces the cost of laboratory analysis.

  5. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  6. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  7. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weakness and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  8. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  9. Classification of Inhomogeneous Media in Tomography Based on Their Contrast

    SciTech Connect

    Anikonov, D.S.; Nazarov, V.G.

    2005-10-15

    The classification of pairs of different substances in accordance with the degree of tomographic visibility of the interfaces between these substances is considered. The classification is performed without using tomographic information and can be considered as a prediction of the quality of the subsequent reconstruction of an unknown medium. The study is based on the solution of the problem of x-ray tomography aimed at the determination of the inner structure of an unknown medium by radiation probing. The classification involves finding the contrast coefficient and studying the character of its energy dependence. The results are illustrated by plots and tomograms obtained by computer simulation.

  10. Key-phrase based classification of public health web pages.

    PubMed

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key-phrase extraction and matching. Easily extendible both to new classes and to new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public health related web pages created by a double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired precision or recall can be achieved.
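
    A bare-bones illustration of key-phrase matching classification; the classes, phrase lists, and threshold are invented examples, and the threshold is the knob that trades precision against recall as described above.

```python
# Score a page by how many of each class's key phrases it contains; label
# it only when the best score clears the threshold.
KEY_PHRASES = {
    "nutrition": ["balanced diet", "vitamin", "calorie intake"],
    "vaccination": ["vaccine", "immunization schedule", "booster dose"],
}

def classify(text, threshold=1):
    text = text.lower()
    scores = {cls: sum(phrase in text for phrase in phrases)
              for cls, phrases in KEY_PHRASES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unclassified"

print(classify("A balanced diet ensures adequate vitamin levels."))  # nutrition
print(classify("A page about something else entirely."))             # unclassified
```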

  11. Target classification algorithm based on feature aided tracking

    NASA Astrophysics Data System (ADS)

    Zhan, Ronghui; Zhang, Jun

    2013-03-01

    An effective target classification algorithm based on feature aided tracking (FAT) is proposed, using the length of the target (target extent) as the classification information. To implement the algorithm, the Rao-Blackwellised unscented Kalman filter (RBUKF) is used to jointly estimate the kinematic state and target extent; meanwhile the joint probability data association (JPDA) algorithm is exploited to implement multi-target data association aided by target down-range extent. Simulation results under different conditions show that the presented algorithm is both accurate and robust, making it suitable for tracking and classifying closely spaced targets in dense clutter.

  12. Relaxation time based classification of magnetic resonance brain images

    NASA Astrophysics Data System (ADS)

    Baselice, Fabio; Ferraioli, Giampaolo; Pascazio, Vito

    2015-03-01

    Brain tissue classification in Magnetic Resonance Imaging is useful for a wide range of applications. Within this manuscript a novel approach for brain tissue joint segmentation and classification is presented. Starting from the relaxation time estimation, we propose a novel method for identifying the optimal decision regions. The approach exploits the statistical distribution of the involved signals in the complex domain. The technique, compared to classical threshold based ones, is able to improve the correct classification rate. The effectiveness of the approach is evaluated on a simulated case study.

  13. Diagnostic ECG classification based on neural networks.

    PubMed

    Bortolan, G; Willems, J L

    1993-01-01

    This study illustrates the use of the neural network approach in the problem of diagnostic classification of resting 12-lead electrocardiograms. A large electrocardiographic library (the CORDA database established at the University of Leuven, Belgium) has been utilized in this study, whose classification is validated by electrocardiographic-independent clinical data. In particular, a subset of 3,253 electrocardiographic signals with single diseases has been selected. Seven diagnostic classes have been considered: normal, left, right, and biventricular hypertrophy, and anterior, inferior, and combined myocardial infarction. The basic architecture used is a feed-forward neural network and the backpropagation algorithm for the training phase. Sensitivity, specificity, total accuracy, and partial accuracy are the indices used for testing and comparing the results with classical methodologies. In order to validate this approach, the accuracy of two statistical models (linear discriminant analysis and logistic discriminant analysis) tuned on the same dataset have been taken as the reference point. Several nets have been trained, either adjusting some components of the architecture of the networks, considering subsets and clusters of the original learning set, or combining different neural networks. The results have confirmed the potentiality and good performance of the connectionist approach when compared with classical methodologies.

  14. Wavelet-based asphalt concrete texture grading and classification

    NASA Astrophysics Data System (ADS)

    Almuntashri, Ali; Agaian, Sos

    2011-03-01

    In this paper, we introduce a new method for the evaluation, quality control, and automatic grading of texture images representing different textural classes of Asphalt Concrete (AC). We present a new asphalt concrete texture grading system for automatic classification and recognition based on the wavelet transform, fractals, and Support Vector Machines (SVM). Experimental results were simulated using different cross-validation techniques and achieved an average classification accuracy of 91.4% on a set of 150 images belonging to five different texture grades.

  15. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At the previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation in the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) go to procedure 4. However, this method has not been automated because it requires not only the target image but also the number of categories for classification. In this paper, we describe some improvements for the implementation of automated texture classification. Some experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and can provide reasonable classification results.

  16. Agent-Based Negotiation in Uncertain Environments

    NASA Astrophysics Data System (ADS)

    Debenham, John; Sierra, Carles

    An agent aims to secure his projected needs by attempting to build a set of (business) relationships with other agents. A relationship is built by exchanging private information, and is characterised by its intimacy — degree of closeness — and balance — degree of fairness. Each argumentative interaction between two agents then has two goals: to satisfy some immediate need, and to do so in a way that develops the relationship in a desired direction. An agent's desire to develop each relationship in a particular way then places constraints on the argumentative utterances. The form of negotiation described is argumentative interaction constrained by a desire to develop such relationships.

  17. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is quite important to USCM. It can be shown by straightforward computation that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem, so the core vector machine (CVM) is introduced and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on small- and medium-scale datasets and large-scale datasets, respectively.

  18. Tomato classification based on laser metrology and computer algorithms

    NASA Astrophysics Data System (ADS)

    Igno Rosario, Otoniel; Muñoz Rodríguez, J. Apolinar; Martínez Hernández, Haydeé P.

    2011-08-01

    An automatic technique for tomato classification based on size and color is presented. The size is determined from surface contouring by laser line scanning, where a Bezier network computes the tomato height from the line position. The tomato color is determined in CIELCH color space from the red and green components. The tomato size is thus classified into large, medium and small, and the tomato is classified into six colors associated with its maturity. The performance and accuracy of the classification system are evaluated against methods reported in recent years. The technique is tested and experimental results are presented.
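
    A minimal sketch of the decision stage only, assuming hypothetical height thresholds and hue bins; the paper's calibrated values, the laser-line scanning and the Bezier height network are not reproduced here.

    ```python
    # Size from measured height via assumed thresholds; color from the
    # CIELCH hue angle of the mean patch color.
    import numpy as np
    from skimage.color import rgb2lab

    def classify_size(height_mm):
        if height_mm >= 70: return "large"      # assumed threshold
        if height_mm >= 55: return "medium"     # assumed threshold
        return "small"

    def classify_color(mean_rgb):
        lab = rgb2lab(np.asarray(mean_rgb, float).reshape(1, 1, 3) / 255.0)[0, 0]
        hue = np.degrees(np.arctan2(lab[2], lab[1])) % 360    # CIELCH hue angle
        # Six illustrative maturity bins from red (ripe) toward green (unripe).
        label = "red"
        for threshold, name in [(40, "orange-red"), (70, "orange"),
                                (90, "yellow"), (110, "yellow-green"), (130, "green")]:
            if hue >= threshold:
                label = name
        return label

    print(classify_size(62.0), classify_color([200, 40, 30]))  # -> medium red
    ```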

  19. Classification of LiDAR Data with Point Based Classification Methods

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for 3-dimensional (3D) data collection over wide areas. Nowadays, airborne LiDAR data are used frequently in various applications such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. The classification of LiDAR points is the first step of the LiDAR data processing chain and should be handled properly, since 3D city modelling, building extraction, DEM generation and similar applications use the classified point clouds directly. Different classification methods appear in recent research, and most of them work with a gridded LiDAR point cloud. In grid-based processing of LiDAR data, the loss of characteristic points, especially over vegetation and buildings, and the loss of height accuracy during the interpolation stage are inevitable. The possible solution is to classify the raw point cloud data, avoiding the data and accuracy losses of the gridding process. In this study, point-based classification of the LiDAR point cloud is investigated to obtain more accurate classes. Automatic point-based approaches, based on hierarchical rules, are proposed to obtain ground, building and vegetation classes from the raw LiDAR point cloud data. In the proposed approaches, every single LiDAR point is analyzed according to features such as height and multi-return, and is then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point-based classification process helped in determining more realistic rule sets. Detailed parameter analyses were performed to obtain the most appropriate parameters in the rule sets and achieve accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features).
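
    A toy sketch of what a hierarchical, point-based rule set of this kind can look like; the thresholds and the rule order are illustrative assumptions, not the parameters derived in the study.

    ```python
    # Per-point hierarchical rules over height and echo features.
    import numpy as np

    def classify_point(height_above_ground, n_returns):
        # Rule 1: points near the ground surface -> ground.
        if height_above_ground < 0.3:
            return "ground"
        # Rule 2: multiple returns usually indicate penetrable canopy -> vegetation.
        if n_returns > 1:
            return "vegetation"
        # Rule 3: elevated single-return points -> building.
        if height_above_ground > 2.5:
            return "building"
        return "unclassified"

    points = np.array([(0.1, 1), (7.2, 3), (6.8, 1), (1.2, 1)])  # (height, returns)
    print([classify_point(h, int(n)) for h, n in points])
    # ['ground', 'vegetation', 'building', 'unclassified']
    ```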

  20. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments offer a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  1. Modelling of robotic work cells using agent based-approach

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In the case of modern manufacturing systems, the requirements, both in the scope and in the characteristics of technical procedures, are changing dynamically. As a result, the organization of the production system cannot keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate quality of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge, so that the agents represent, in a self-organizing agent environment, the components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints that characterize this type of production unit. It is possible to distinguish several groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting the various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, and data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.

  2. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is imperfect in several respects. Ensemble learning is therefore introduced to address this issue: it trains a set of different learners and combines their individual outputs to obtain more accurate and stable results. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning over sparse representations to achieve better classification.

  3. ART-Based Neural Networks for Multi-label Classification

    NASA Astrophysics Data System (ADS)

    Sapozhnikova, Elena P.

    Multi-label classification is an active and rapidly developing research area of data analysis. It becomes increasingly important in such fields as gene function prediction, text classification or web mining. This task corresponds to classification of instances labeled by multiple classes rather than just one. Traditionally, it was solved by learning independent binary classifiers for each class and combining their outputs to obtain multi-label predictions. Alternatively, a classifier can be directly trained to predict a label set of an unknown size for each unseen instance. Recently, several direct multi-label machine learning algorithms have been proposed. This paper presents a novel approach based on ART (Adaptive Resonance Theory) neural networks. The Fuzzy ARTMAP and ARAM algorithms were modified in order to improve their multi-label classification performance and were evaluated on benchmark datasets. Comparison of experimental results with the results of other multi-label classifiers shows the effectiveness of the proposed approach.

  4. Indoor scene classification of robot vision based on cloud computing

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Qi, Yuxiao; Li, Shipeng

    2016-07-01

    For intelligent service robots, indoor scene classification is an important issue. To overcome the weak real-time performance of conventional algorithms, a new method based on cloud computing is proposed for global image features in indoor scene classification. Using the MapReduce paradigm, global PHOG features of indoor scene images are extracted in parallel, and the feature vectors are used to train an SVM decision classifier concurrently. The indoor scene is then classified by the decision classifier. To verify the algorithm's performance, we carried out an experiment with 350 typical indoor scene images from the MIT LabelMe image library. Experimental results show that the proposed algorithm attains better real-time performance: it is generally 1.4 to 2.1 times faster than traditional classification methods that rely on a single machine, while keeping a stable classification accuracy of 70%.
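
    A single-machine sketch of the pipeline's shape, with multiprocessing standing in for the MapReduce layer and plain HOG standing in for PHOG; both substitutions are simplifying assumptions.

    ```python
    # Parallel ("map") feature extraction followed by SVM training.
    from multiprocessing import Pool
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def extract_feature(image):
        return hog(image, orientations=8, pixels_per_cell=(16, 16),
                   cells_per_block=(1, 1))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        images = [rng.random((128, 128)) for _ in range(40)]  # stand-in scene images
        labels = rng.integers(0, 4, size=40)                  # 4 hypothetical scene classes
        with Pool(processes=4) as pool:                       # "map" phase
            features = pool.map(extract_feature, images)
        clf = SVC(kernel="rbf").fit(np.stack(features), labels)  # train phase
        print(clf.predict(np.stack(features[:5])))
    ```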

  5. Video based object representation and classification using multiple covariance matrices

    PubMed Central

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue of this task is to develop an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. Firstly, we use the Nonnegative Matrix Factorization (NMF) method for image clustering within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and the nearest neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method. PMID:28594823
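
    A sketch of the representation step under stated assumptions: NMF clusters the frames of one image set, and each cluster is summarized by a covariance matrix; the discriminative learning and KLDA stages are omitted.

    ```python
    # One image set -> NMF clustering -> one covariance matrix per cluster.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    image_set = rng.random((60, 400))        # 60 frames, each a flattened 20x20 image

    nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(image_set)         # frame-by-component weights
    cluster_ids = W.argmax(axis=1)           # assign each frame to its dominant component

    covariances = []
    for k in range(3):
        members = image_set[cluster_ids == k]
        # ddof=0 keeps the estimate defined even for tiny clusters;
        # a small ridge keeps the matrix well-conditioned.
        covariances.append(np.cov(members, rowvar=False, ddof=0) + 1e-6 * np.eye(400))
    print([c.shape for c in covariances])    # three 400x400 matrices
    ```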

  7. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires a continual update of the Mental Health Ontology. In this paper, we describe how the ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four different types of agents: Information agent, Data Warehouse agent, Data Mining agents and Ontology agent. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various mental health research experiments of a collaborative nature, which are greatly needed in times of increasing mental distress and illness.

  8. Pathological Bases for a Robust Application of Cancer Molecular Classification

    PubMed Central

    Diaz-Cano, Salvador J.

    2015-01-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of the known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomic, transcriptomic and genomic evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed on through mechanisms of intercellular transfer of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next-generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411

  9. A clinically applicable molecular-based classification for endometrial cancers.

    PubMed

    Talhouk, A; McConechy, M K; Leung, S; Li-Chang, H H; Kwon, J S; Melnyk, N; Yang, W; Senz, J; Boyd, N; Karnezis, A N; Huntsman, D G; Gilks, C B; McAlpine, J N

    2015-07-14

    Classification of endometrial carcinomas (ECs) by morphologic features is inconsistent, and yields limited prognostic and predictive information. A new system for classification based on the molecular categories identified in The Cancer Genome Atlas is proposed. Genomic data from the Cancer Genome Atlas (TCGA) support classification of endometrial carcinomas into four prognostically significant subgroups; we used the TCGA data set to develop surrogate assays that could replicate the TCGA classification, but without the need for the labor-intensive and cost-prohibitive genomic methodology. Combinations of the most relevant assays were carried forward and tested on a new independent cohort of 152 endometrial carcinoma cases, and molecular vs clinical risk group stratification was compared. Replication of TCGA survival curves was achieved with statistical significance using multiple different molecular classification models (16 total tested). Internal validation supported carrying forward a classifier based on the following components: mismatch repair protein immunohistochemistry, POLE mutational analysis and p53 immunohistochemistry as a surrogate for 'copy-number' status. The proposed molecular classifier was associated with clinical outcomes, as was stage, grade, lymph-vascular space invasion, nodal involvement and adjuvant treatment. In multivariable analysis both molecular classification and clinical risk groups were associated with outcomes, but differed greatly in composition of cases within each category, with half of POLE and mismatch repair loss subgroups residing within the clinically defined 'high-risk' group. Combining the molecular classifier with clinicopathologic features or risk groups provided the highest C-index for discrimination of outcome survival curves. Molecular classification of ECs can be achieved using clinically applicable methods on formalin-fixed paraffin-embedded samples, and provides independent prognostic information beyond

  10. Agent Based Simulation Seas Evaluation of DoDAF Architecture

    DTIC Science & Technology

    2004-03-01

    04-05 Abstract: With Department of Defense (DoD) weapon systems being deeply rooted in the command, control, communications, computers, intelligence ... Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR). Operational studies have shown agent based simulation utilizing this ... traffic, people in crowds, artificial characters in computer games, agents in financial markets, and humans and machines on battlefields [28]. Agent

  11. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    GIS applications advance the ability to analyze remote sensing imagery. Classification and information extraction from remote sensing images are the primary information sources for GIS in LUCC (land use and cover change) applications, so increasing classification accuracy is an important topic of remote sensing research. Adding features and investigating new classification methods are two ways to improve classification accuracy. The ant colony algorithm, whose agents belong to the field of nature-inspired computation, exhibits a form of swarm intelligence, and its application to remote sensing image classification is a comparatively new approach. Studying the applicability of the ant colony algorithm with richer feature sets, and exploring its advantages and performance, is therefore of considerable significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, Slope, Aspect) and textural information (Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment, Correlation) was built. Classification rules based on the different features are discovered from the samples through the ant colony algorithm, and a classification test is performed based on these rules. At the same time, we compare the accuracies with traditional maximum likelihood, C4.5 and rough set classifications. The study shows that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, near-term land use and cover changes in Fuzhou are studied and mapped using remote sensing technology based on the ant colony algorithm.

  12. Space Situational Awareness using Market Based Agents

    NASA Astrophysics Data System (ADS)

    Sullivan, C.; Pier, E.; Gregory, S.; Bush, M.

    2012-09-01

    Space surveillance for the DoD is not limited to the Space Surveillance Network (SSN). Other DoD-owned assets have some existing capabilities for tasking but have no systematic way to work collaboratively with the SSN. These are run by diverse organizations including the Services, other defense and intelligence agencies, and national laboratories. Beyond these organizations, academic and commercial entities have systems that possess SSA capability. Almost all of these assets have some level of connectivity, security, and potential autonomy. Exploiting them in a mutually beneficial structure could provide a more comprehensive, efficient and cost-effective solution for SSA. The collection of all potential assets, providers and consumers of SSA data comprises a market which is functionally illiquid. The development of a dynamic marketplace for SSA data could give would-be providers the opportunity to sell data to SSA consumers for monetary or incentive-based compensation. A well-conceived market architecture could drive down SSA data costs through increased supply and improve efficiency through increased competition. Oceanit will investigate market and market-agent architectures, protocols, standards, and incentives toward producing high-volume/low-cost SSA.

  13. Agent Based Velocity Control of Highway Systems

    DTIC Science & Technology

    1997-09-01

    the vector of behavior functions, C^i is the behavior modification function for the i-th agent, and a_i is the command action issued by the i-th agent ... in a Lie-Taylor series [10]. In particular, we can express the change in the behavior modification functions C^i due to the flow over the interval ... the model formulated in expression (13). At time t and at point p ∈ M the behavior modification function of agent i is given by: C^i(p, t) = C^i(p

  14. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper presents an automatic image classification and outlier identification method based on density clustering of superpixel cluster centers. Pixel location coordinates and gray values are used to compute density and distance, from which automatic classification and outlier extraction are achieved. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance decision rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that the method requires no human intervention, classifies images faster than the plain density clustering algorithm, and performs automated classification and outlier extraction effectively.
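
    A sketch of the density/distance computation in the spirit of density-peak clustering, which the abstract builds on; the cutoff percentile, the number of centers and the normalized product rule are illustrative assumptions, and the superpixel pre-segmentation step is omitted.

    ```python
    # Density-peak style center selection on (x, y, gray) vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((300, 3))                    # (x, y, gray) per superpixel
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)

    dc = np.percentile(d[d > 0], 2.0)             # cutoff distance (assumed percentile)
    rho = (d < dc).sum(axis=1) - 1.0              # local density
    delta = np.empty_like(rho)
    for i in range(len(pts)):                     # distance to nearest denser point
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()

    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    gamma = norm(rho) * norm(delta)               # normalized decision value
    centers = np.argsort(gamma)[-4:]              # 4 centers (illustrative count)
    labels = d[:, centers].argmin(axis=1)         # assign points to nearest center
    outliers = np.where((norm(delta) > 0.6) & (norm(rho) < 0.1))[0]  # isolated points
    print(centers, np.bincount(labels), len(outliers))
    ```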

  15. SVM based classification of moving objects in video

    NASA Astrophysics Data System (ADS)

    Sun, Airong; Bai, Min; Tan, Yihua; Tian, Jinwen

    2009-10-01

    In this paper, a machine learning method for classifying four types of moving objects (vehicle, human, motorcycle and bicycle) in surveillance video is presented. The method comprises three steps: feature selection, training of a Support Vector Machine (SVM) classifier, and performance evaluation. Firstly, a feature vector representing the discriminability of an object is described. From the object profile, the width-to-height ratio and the trisection width-to-height ratios are adopted as distinctive features. Moreover, we approximate the object mask by its bounding rectangle, which yields a rectangle-degree feature defined as the ratio of the object area to the bounding-rectangle area. To cope with invariance to scale, rotation and so on, Hu moment invariants, Fourier descriptors and dispersedness are extracted as further features. Secondly, a multi-class classifier is designed based on two-class SVMs; the idea behind the classifier structure is that multi-class classification can be converted into a combination of two-class classifications, so in our case the final classification is the voting result of six two-class classifiers. Thirdly, we determine the precise feature selection by experiment, selecting different features for each two-class classifier according to the classification results. The true positive rate, false positive rate and a discriminative index are used to evaluate the classifier's performance. Experimental results show that the classifier achieves good classification precision on real and test data.
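
    A sketch of the voting structure: four classes give C(4,2) = 6 two-class SVMs, and the majority vote decides. The random features stand in for the shape and moment descriptors above.

    ```python
    # One-vs-one SVM voting over 4 object classes.
    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((200, 12))             # stand-in shape feature vectors
    y = rng.integers(0, 4, size=200)      # 0..3 = vehicle/human/motorcycle/bicycle

    models = {}
    for a, b in combinations(range(4), 2):  # one SVM per class pair -> 6 classifiers
        mask = (y == a) | (y == b)
        models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])

    def predict(x):
        votes = np.zeros(4, dtype=int)
        for m in models.values():
            votes[m.predict(x[None])[0]] += 1   # each two-class SVM casts one vote
        return votes.argmax()

    print([predict(x) for x in X[:5]])
    ```

    Note that sklearn's SVC already implements this one-vs-one scheme internally; the explicit loop is shown only to make the six-classifier voting structure visible.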

  16. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral LiDAR system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are processed with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne LiDAR sensor, the Optech Titan system is capable of collecting point cloud data from three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral LiDAR point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral LiDAR point clouds for 3D land cover classification.

  17. Atmospheric circulation classification comparison based on wildfires in Portugal

    NASA Astrophysics Data System (ADS)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for studying the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with an impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  18. A new climate classification based on Markov chain analysis

    NASA Astrophysics Data System (ADS)

    Mieruch, S.; Noël, S.; Bovensmann, H.; Burrows, J. P.; Freund, J. A.

    2009-12-01

    Existing climate classifications comprise the genetic classification, which is based on climate genesis factors such as winds and oceanic versus continental influence, and the empiric classification, based on temperature, precipitation, vegetation and more. We present a novel method for climate classification that is based on dynamic climate descriptors, namely persistence, recurrence time and entropy, coining a dynamic classification of climate. These descriptors are derived from a coarse-grained categorical representation of multivariate time series and a subsequent Markov chain analysis. They are useful for a comparative analysis of different climate regions and, in the context of global climate change, for regime shift analysis. We apply the method to the bivariate set of water vapor and temperature of two regional climates, the Iberian Peninsula and the islands of Hawaii in the central Pacific Ocean. Through the Markov chain analysis and via the derived descriptors we are able to quantify significant differences between the two climate regions. We discuss how these descriptors reflect properties such as climate stability, rate of change and short-term predictability.
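
    A sketch of how the three descriptors can be read off an estimated transition matrix: persistence from its diagonal, mean recurrence time from the stationary distribution (Kac's formula), and an entropy rate from the transition probabilities. The four-state coding of the series is an assumption for illustration.

    ```python
    # Transition-matrix descriptors for a 4-state categorical climate series.
    import numpy as np

    rng = np.random.default_rng(0)
    states = rng.integers(0, 4, size=5000)    # coarse-grained (water vapor, temperature) bins
    K = 4

    T = np.zeros((K, K))
    for a, b in zip(states[:-1], states[1:]): # count observed transitions
        T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)         # row-stochastic transition matrix

    persistence = np.diag(T)                  # probability of staying in a state

    w, v = np.linalg.eig(T.T)                 # stationary distribution: left eigenvector
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    recurrence_time = 1.0 / pi                # Kac's formula: mean return time = 1/pi_i

    logT = np.zeros_like(T)
    np.log2(T, out=logT, where=T > 0)
    entropy_rate = -(pi[:, None] * T * logT).sum()   # entropy rate in bits per step

    print(persistence, recurrence_time, entropy_rate)
    ```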

  19. EEG sensor based classification for assessing psychological stress.

    PubMed

    Begum, Shahina; Barua, Shaibal

    2013-01-01

    The electroencephalogram (EEG) reflects brain activity and is widely used in biomedical research. However, analysis of this signal is still a challenging issue. This paper presents a hybrid approach for assessing stress using the EEG signal. It applies Multivariate Multi-scale Entropy Analysis (MMSE) for data-level fusion, and case-based reasoning is used for the classification tasks. Our preliminary results indicate that EEG sensor based classification could be an efficient technique for evaluating the psychological state of individuals. Thus, the system could be used for personal health monitoring in order to improve users' health.

  20. Single image super-resolution based on image patch classification

    NASA Astrophysics Data System (ADS)

    Xia, Ping; Yan, Hua; Li, Jing; Sun, Jiande

    2017-06-01

    This paper proposes a single image super-resolution algorithm based on image patch classification and sparse representation, where gradient information is used to classify image patches into three different classes to reflect the differences between patch types. Compared with other classification algorithms, the gradient-information-based algorithm is simpler and more effective. Each class is learned to obtain a corresponding sub-dictionary, and a high-resolution image patch can then be reconstructed from the dictionary and the sparse representation coefficients of the corresponding class of image patches. Experimental results demonstrate that the proposed algorithm performs better than the other algorithms compared.

  1. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
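
    A sketch of the idea under stated assumptions: mutual information between bands is estimated from a joint histogram of locally averaged pixels (a 3x3 mean filter injects the spatial dependency), and a greedy loop keeps the least redundant bands. The seed band, filter size and band count are illustrative.

    ```python
    # Spatially smoothed mutual information for greedy band selection.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def mutual_information(a, b, bins=32):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        return (p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum()

    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 20))                  # H x W x bands
    smoothed = uniform_filter(cube, size=(3, 3, 1))  # spatial context per band

    selected = [0]                                   # assumed seed band
    while len(selected) < 5:                         # keep the 5 least-redundant bands
        candidates = [b for b in range(20) if b not in selected]
        redundancy = [max(mutual_information(smoothed[..., b], smoothed[..., s])
                          for s in selected) for b in candidates]
        selected.append(candidates[int(np.argmin(redundancy))])
    print(sorted(selected))
    ```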

  2. Rotation invariant texture classification based on Gabor wavelets

    NASA Astrophysics Data System (ADS)

    Xie, Xudong; Lu, Jianhua; Gong, Jie; Zhang, Ning

    2007-11-01

    In this paper, an efficient rotation invariant texture classification method is proposed. Compared with a previous texture classification method also based on Gabor wavelets, two modifications are made in this paper. Firstly, an adaptive circular orientation normalization scheme is proposed. Because the effects of both orientation and frequency on Gabor features are considered, our method can effectively eliminate inter-frequency disturbance and therefore reduce the effect of image rotation. Secondly, besides the Gabor features, which mainly represent the local texture information of an image, the statistical properties of the image intensity values are also used for texture classification in our algorithm. Our method is evaluated on the Brodatz album, and the experimental results show that it outperforms the traditional algorithms.

  3. Agent Persuasion Mechanism of Acquaintance

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Agent persuasion can improve negotiation efficiency in dynamic environments thanks to the agents' initiative, autonomy and related properties, and it is strongly affected by acquaintance. A classification of acquaintance in agent persuasion is illustrated, together with an agent persuasion model of acquaintance. The concept of the agent persuasion degree of acquaintance is then given, and finally the related interaction mechanism is elaborated.

  4. Agent-based model of soil water dynamics

    NASA Astrophysics Data System (ADS)

    Mewes, Benjamin; Schumann, Andreas

    2017-04-01

    In the last decade, agent based modelling has become increasingly popular in social science, biology and environmental modelling. The concept is designed to simulate systems that are highly dynamic and sensitive to small variations in their composition and state. As hydrological systems often show dynamic and nonlinear behaviour, agent based modelling can be an adequate way to model aquatic systems. Nevertheless, only a few agent based modelling results are known in hydrology so far. Processes like the percolation of water through the soil are highly responsive to the state of the pedological system. To simulate these water fluxes correctly with known approaches like the Green-Ampt model or approximations to the Richards equation, small time steps and a high spatial discretisation are needed. In this study a new approach for modelling water fluxes in a soil column is presented: autonomous water agents that transport water through the soil while interacting with their environment, as well as with other agents, under physical laws. Setting up an agent-based model requires a predefined rule set for the behaviour of the autonomous agents; moreover, we present some basic assumptions about the interactions, not only between agents but also between agents and their environment. Our study shows that agent-based modelling in hydrology leads to very promising results, but we also face new challenges.
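
    A toy sketch of the concept, assuming a 1-D column, unit water parcels and a single free-capacity rule; the study's physically based rule set is much richer than this.

    ```python
    # Water parcels as agents percolating through a 1-D soil column.
    import numpy as np

    N_LAYERS = 10
    capacity = np.full(N_LAYERS, 5.0)   # storage capacity per layer (in parcels)
    content = np.zeros(N_LAYERS)        # parcels each layer currently holds
    positions = [-1] * 30               # 30 water agents; -1 = ponded at the surface
    percolated = 0

    for step in range(200):
        for i, pos in enumerate(positions):
            if pos == N_LAYERS:
                continue                              # already drained out of the column
            below = pos + 1
            if below == N_LAYERS:                     # leaving the bottom of the column
                content[pos] -= 1
                positions[i] = N_LAYERS
                percolated += 1
            elif content[below] < capacity[below]:    # rule: move down if space below
                if pos >= 0:
                    content[pos] -= 1
                content[below] += 1
                positions[i] = below
            # else: blocked by a saturated layer -> agent stays put this step

    print("stored per layer:", content, "| deep percolation:", percolated)
    ```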

  5. Multiclass cancer classification based on gene expression comparison

    PubMed Central

    Yang, Sitan; Naiman, Daniel Q.

    2016-01-01

    As the complexity and heterogeneity of cancer are increasingly appreciated through genomic analyses, microarray-based cancer classification comprising multiple discriminatory molecular markers is an emerging trend. Such multiclass classification problems pose new methodological and computational challenges for developing novel and effective statistical approaches. In this paper, we introduce a new approach for classifying multiple disease states associated with cancer based on gene expression profiles. Our method focuses on detecting small sets of genes in which the relative comparison of their expression values leads to class discrimination. For an m-class problem, the classification rule typically depends on a small number of m-gene sets, which provide transparent decision boundaries and allow for potential biological interpretations. We first test our approach on seven common gene expression datasets and compare it with popular classification methods including support vector machines and random forests. We then consider an extremely large cohort of leukemia cancer to further assess its effectiveness. In both experiments, our method yields results comparable or even superior to the benchmark classifiers. In addition, we demonstrate that our approach can integrate pathway analysis of gene expression to provide accurate and biologically meaningful classification. PMID:24918456
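
    A sketch of the core idea in the two-class case: pick the gene pair whose expression ordering best separates the classes and classify by that relative comparison alone (a top-scoring-pair rule). The data are stand-ins and the multiclass m-gene-set machinery is omitted.

    ```python
    # Two-class top-scoring-pair rule on stand-in expression data.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    X = rng.random((80, 15))                 # 80 samples x 15 genes (stand-in data)
    y = rng.integers(0, 2, size=80)          # two disease states

    def order_prob(i, j, cls):               # P(gene_i < gene_j | class)
        return np.mean(X[y == cls, i] < X[y == cls, j])

    best, pair = -1.0, None
    for i, j in combinations(range(15), 2):
        score = abs(order_prob(i, j, 0) - order_prob(i, j, 1))
        if score > best:
            best, pair = score, (i, j)

    i, j = pair
    flip = order_prob(i, j, 0) > order_prob(i, j, 1)   # which class favors gene_i < gene_j
    predict = lambda x: 0 if (x[i] < x[j]) == flip else 1
    print("best pair:", pair, "score:", round(best, 3), "prediction:", predict(X[0]))
    ```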

  6. Internet-enabled collaborative agent-based supply chains

    NASA Astrophysics Data System (ADS)

    Shen, Weiming; Kremer, Rob; Norrie, Douglas H.

    2000-12-01

    This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the proposed architecture is well suited to managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture, with its basic communication and cooperation services, domain-independent components, prototypes and mechanisms, is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed, and a case study on Internet-enabled supply chain management is presented.

  7. Assistive technology device classification based upon the World Health Organization's, International Classification of Functioning, Disability and Health (ICF).

    PubMed

    Bauer, Stephen M; Elsaesser, Linda-Jeanne; Arthanat, Sajay

    2011-01-01

    To develop an assistive technology device classification (ATDC) consistent with the Assistive Technology Act (ATA2004), Americans with Disabilities Act (ADA2008), International Classification of Functioning, Disability and Health (ICF), International Classification of Diseases, Ninth Revision-Clinical Modification (ICD-9-CM) and the American Medical Association's Current Procedural Terminology (CPT). Current assistive technology device (ATD) classifications include: the National Classification System for Assistive Technology Devices and Services (RTI/NCS), published in 2000; ISO 9999: technical aids for persons with disabilities - classification and terminology (ISO 9999), published in 1992, 1998, 2002 and 2007; and the ICF-based AT classification (ICF/AT2007), published in 2009. We derive 'requirements' for an ATD classification from the ATA2004, ADA2008, ICF, ICD-9-CM and CPT; review the existing ATD classifications and online databases against these requirements; and construct the ATDC to be consistent with all requirements, demonstrating it with examples. Existing ATD classifications and online databases are inconsistent with the requirements. The ATDC is consistent, has inclusion and exclusion criteria and classification rules, employs ICF coding, an extendable hierarchy and language, and uses standard device-naming conventions. In conclusion, the ATDC has broad application to: the provision of assistive technology services (ATSs), characterisation and analysis of AT industries, federally sponsored research pertaining to AT development and commercialisation, and the federal health insurance scope of benefits.

  8. Improvement of Bioactive Compound Classification through Integration of Orthogonal Cell-Based Biosensing Methods

    PubMed Central

    Chaplen, Frank W. R.; Vissvesvaran, Ganesh; Henry, Eric C.; Jovanovic, Goran N.

    2007-01-01

    Lack of specificity for different classes of chemical and biological agents, and false positives and negatives, can limit the range of applications for cell-based biosensors. This study suggests that the integration of results from algal cells (Mesotaenium caldariorum) and fish chromatophores (Betta splendens) improves classification efficiency and detection reliability. Cells were challenged with paraquat, mercuric chloride, sodium arsenite and clonidine. The two detection systems were independently investigated for classification of the toxin set by performing discriminant analysis. The algal system correctly classified 72% of the bioactive compounds, whereas the fish chromatophore system correctly classified 68%. The combined classification efficiency was 95%. The algal sensor readout is based on fluorescence measurements of changes in the energy producing pathways of photosynthetic cells, whereas the response from fish chromatophores was quantified using optical density. Change in optical density reflects interference with the functioning of cellular signal transduction networks. Thus, algal cells and fish chromatophores respond to the challenge agents through sufficiently different mechanisms of action to be considered orthogonal.

  9. A hybrid agent-based approach for modeling microbiological systems.

    PubMed

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt differential equations or agent-based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the multi-agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2x10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
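
    A toy sketch of the hybrid scheme: cells as discrete agents, the chemoattractant as a continuous quantity field updated by a diffusion step, and a gradient-climbing rule per agent. Grid size, diffusion constant and the step rule are illustrative assumptions.

    ```python
    # Hybrid ABM: agent cells over a diffusing molecule field.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50
    field = np.zeros((N, N)); field[25, 25] = 100.0   # molecule quantities on a grid
    cells = rng.integers(0, N, size=(30, 2))          # 30 cell agents (row, col)

    for t in range(200):
        # Quantity layer: explicit finite-difference diffusion step.
        lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
        field += 0.2 * lap
        field[25, 25] = 100.0                         # constant source
        # Agent layer: each cell steps toward the richest neighbouring site.
        for k, (r, c) in enumerate(cells):
            neighbours = [((r + dr) % N, (c + dc) % N)
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            cells[k] = max(neighbours, key=lambda rc: field[rc])

    mean_dist = np.mean(np.linalg.norm(cells - np.array([25, 25]), axis=1))
    print("mean distance to source after run:", round(float(mean_dist), 2))
    ```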

  10. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely under studied, and represents an area where further research is needed to…

  11. Detection/classification/quantification of chemical agents using an array of surface acoustic wave (SAW) devices

    NASA Astrophysics Data System (ADS)

    Milner, G. Martin

    2005-05-01

    ChemSentry is a portable system used to detect, identify, and quantify chemical warfare (CW) agents. Electrochemical (EC) cell sensor technology is used for blood agents, and an array of surface acoustic wave (SAW) sensors is used for nerve and blister agents. The combination of the EC cell and the SAW array provides sufficient sensor information to detect, classify and quantify all CW agents of concern using smaller, lighter, lower cost units. Initial development of the SAW array and processing was a key challenge for ChemSentry, requiring several years of fundamental testing of polymers and coating methods before the sensor array design was finalized in 2001. Following the finalization of the SAW array, nearly three years of intensive testing in both laboratory and field environments were required to gather sufficient data to fully understand the response characteristics. Virtually unbounded permutations of agent and environmental characteristics must be considered in order to operate against all agents and all environments of interest to the U.S. military and other potential users of ChemSentry. The resulting signal processing design, matched to this extensive body of measured data (over 8,000 agent challenges and 10,000 hours of ambient data), is considered a significant advance in the state of the art for CW agent detection.

  13. Competency Based Curriculum for Real Estate Agent.

    ERIC Educational Resources Information Center

    McCloy, Robert J.

    This publication is a curriculum and teaching guide for preparing real estate agents in the state of West Virginia. The guide contains 30 units, or lessons. Each lesson is designed to cover three to five hours of instruction time. Competencies provided for each lesson are stated in terms of what the student should be able to do as a result of the…

  15. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom-based classification; this is the first attempt to add such a classification to the existing epidemic framework. The model is incorporated into a five-class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets, with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that the classification-based prevention given in the model can play a substantial role in containing network epidemics. Further simulation-based experiments use a three-category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  16. Accuracy and efficiency of area classifications based on tree tally

    Treesearch

    Michael S. Williams; Hans T. Schreuder; Raymond L. Czaplewski

    2001-01-01

    Inventory data are often used to estimate the area of the land base that is classified as a specific condition class. Examples include areas classified as old-growth forest, private ownership, or suitable habitat for a given species. Many inventory programs rely on classification algorithms of varying complexity to determine condition class. These algorithms can be...

  18. An Extension Dynamic Model Based on BDI Agent

    NASA Astrophysics Data System (ADS)

    Yu, Wang; Feng, Zhu; Hua, Geng; WangJing, Zhu

    This paper's research is based on the BDI Agent model. Firstly, the paper analyzes the deficiencies of the traditional BDI Agent model; it then proposes an extended dynamic BDI Agent model built on the traditional one. The extended model can quickly realize the internal interactions of the traditional BDI Agent model, deal with complex issues in dynamic and open environments, and react quickly. The new model is shown to be natural and reasonable through a simulation of the origin of a cultural practice, using a model of monkeys learning to eat sweet potatoes, designed on the extended dynamic model. Its feasibility is verified by comparing the extended dynamic BDI Agent model with the traditional BDI Agent model using SWARM, which has important theoretical significance.

  19. Classification.

    PubMed

    Tuxhorn, Ingrid; Kotagal, Prakash

    2008-07-01

    In this article, we review the practical approach and diagnostic relevance of current seizure and epilepsy classification concepts and principles as a basic framework for good management of patients with epileptic seizures and epilepsy. Inaccurate generalizations about terminology, diagnosis, and treatment may be the single most important factor, next to an inadequately obtained history, that determines the misdiagnosis and mismanagement of patients with epilepsy. A stepwise signs and symptoms approach for diagnosis, evaluation, and management along the guidelines of the International League Against Epilepsy and definitions of epileptic seizures and epilepsy syndromes offers a state-of-the-art clinical approach to managing patients with epilepsy.

  20. Classification of CT-brain slices based on local histograms

    NASA Astrophysics Data System (ADS)

    Avrunin, Oleg G.; Tymkovych, Maksym Y.; Pavlov, Sergii V.; Timchik, Sergii V.; Kisała, Piotr; Orakbaev, Yerbol

    2015-12-01

    Neurosurgical intervention is a very complicated process. Modern operating procedures are based on data such as CT, MRI, etc., and automated analysis of these data is an important task for researchers. Some modern methods of brain-slice segmentation use additional information to process these images, and classification can be used to obtain this information. To classify CT images of the brain, we suggest using local histograms and features extracted from them. The paper presents the process of feature extraction and classification of CT slices of the brain; the feature extraction process is specialized for axial cross-sections. The work can be applied to medical neurosurgical systems.
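
    A sketch of the feature-extraction step, assuming a simple block grid and a few histogram statistics per block; the block size and the statistics kept are illustrative choices, not the paper's.

    ```python
    # Grid of blocks -> grey-level histogram per block -> summary statistics.
    import numpy as np

    def local_histogram_features(slice_img, block=64, bins=16):
        h, w = slice_img.shape
        feats = []
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                hist, _ = np.histogram(slice_img[r:r + block, c:c + block],
                                       bins=bins, range=(0.0, 1.0))
                p = hist / hist.sum()
                nz = p[p > 0]
                entropy = -(nz * np.log(nz)).sum()
                feats.extend([p.std(), p.argmax() / bins, entropy])
        return np.array(feats)

    rng = np.random.default_rng(0)
    ct_slice = rng.random((256, 256))      # stand-in for a normalized axial CT slice
    features = local_histogram_features(ct_slice)
    print(features.shape)                  # one feature vector per slice -> classifier input
    ```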

  1. Airborne LIDAR Points Classification Based on Tensor Sparse Representation

    NASA Astrophysics Data System (ADS)

    Li, N.; Pfeifer, N.; Liu, C.

    2017-09-01

    Common statistical methods for supervised classification usually require a large amount of training data to achieve reasonable results, which is time consuming and inefficient. This paper proposes a tensor sparse representation classification (SRC) method for airborne LiDAR points. The LiDAR points are represented as tensors to keep their attributes in the spatial domain. Only a small amount of training data is then used for dictionary learning, and the sparse tensor is calculated with a tensor OMP algorithm. The point label is determined by the minimal reconstruction residual. Experiments are carried out on real LiDAR points, and the results show that objects can be distinguished successfully by this algorithm.
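
    A sketch of SRC with OMP in the plain vector (non-tensor) case: dictionary atoms are a few training points per class, and the label is the class with the smallest class-restricted reconstruction residual. The synthetic data and sparsity level are illustrative assumptions.

    ```python
    # Vector SRC: sparse-code a sample over a class-labelled dictionary with OMP.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n_classes, per_class, dim = 3, 10, 20
    atoms, atom_class, centers = [], [], []
    for c in range(n_classes):
        center = rng.normal(3.0 * c, 1.0, size=dim)
        centers.append(center)
        for _ in range(per_class):
            atoms.append(center + rng.normal(0, 0.3, size=dim))
            atom_class.append(c)
    D = np.array(atoms).T
    D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary columns
    atom_class = np.array(atom_class)

    def src_label(x):
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5,
                                        fit_intercept=False).fit(D, x)
        code = omp.coef_
        residuals = [np.linalg.norm(x - D @ np.where(atom_class == c, code, 0.0))
                     for c in range(n_classes)]
        return int(np.argmin(residuals))         # minimal reconstruction residual

    test = centers[1] + rng.normal(0, 0.3, size=dim)   # a new sample from class 1
    print(src_label(test))                             # -> 1
    ```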

  2. Adaptive color correction based on object color classification

    NASA Astrophysics Data System (ADS)

    Kotera, Hiroaki; Morimoto, Tetsuro; Yasue, Nobuyuki; Saito, Ryoichi

    1998-09-01

    An adaptive color management strategy depending on the image contents is proposed. A pictorial color image is classified into different object areas with clustered color distributions. Euclidean or Mahalanobis color distance measures, and a maximum likelihood method based on the Bayesian decision rule, are introduced for the classification. After the classification process, the clustered pixels are projected onto the principal component space by the Hotelling transform, and color corrections are performed so that the principal components match between the individual clustered color areas of the original and printed images.

  3. An Agent-Based Cockpit Task Management System

    NASA Technical Reports Server (NTRS)

    Funk, Ken

    1997-01-01

    An agent-based program to facilitate Cockpit Task Management (CTM) in commercial transport aircraft is developed and evaluated. The agent-based program called the AgendaManager (AMgr) is described and evaluated in a part-task simulator study using airline pilots.

  4. Adaptive stellar spectral subclass classification based on Bayesian SVMs

    NASA Astrophysics Data System (ADS)

    Du, Changde; Luo, Ali; Yang, Haifeng

    2017-02-01

    Stellar spectral classification is one of the most fundamental tasks in survey astronomy. Many automated classification methods have been applied to spectral data, but their main limitation is that the model parameters must be tuned repeatedly for different data sets. In this paper, we utilize Bayesian support vector machines (BSVM) to classify stellar spectral subclass data. Based on Gibbs sampling, BSVM can infer all model parameters adaptively according to different data sets, which allows us to circumvent the time-consuming cross validation for the penalty parameter. We explored different normalization methods for stellar spectral data, and the best one is suggested in this study. Finally, experimental results on several stellar spectral subclass classification problems show that the BSVM model not only possesses good adaptability but also provides better prediction performance than traditional methods.

  5. Hyperspectral image classification based on volumetric texture and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui

    2015-06-01

    A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, the volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM), while the spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) for band clustering. Moreover, four feature combination schemes were designed for hyperspectral image classification using spectral and textural features. The proposed method using VGLCM outperforms the gray-level co-occurrence matrix (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features improves classification performance in hyperspectral imagery.
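
    For readers unfamiliar with co-occurrence texture features, the sketch below computes standard 2-D GLCM statistics with scikit-image (function names assume a recent release); the paper's VGLCM extends the same idea to a volumetric window, which this toy example does not reproduce.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # A synthetic 8-level "band" image standing in for one hyperspectral band.
    rng = np.random.default_rng(0)
    band = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)

    # Co-occurrence counts at distance 1 for four directions.
    glcm = graycomatrix(band, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=8, symmetric=True, normed=True)

    # Haralick-style statistics commonly used as texture features.
    features = [graycoprops(glcm, prop).mean()
                for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
    print(features)
    ```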

  6. Risk-based Classification of Incidents

    NASA Technical Reports Server (NTRS)

    Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.

    2003-01-01

    As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.

  7. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the type of crop grown in a region for agricultural planning. There are two types of classification: pixel based and object based. While pixel-based classification methods rely on the information in each pixel, object-based classification relies on image objects formed by grouping sets of similar pixels. A multispectral image has higher spectral resolution than a panchromatic image, while a panchromatic image has higher spatial resolution than a multispectral image. Pan sharpening merges high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution color image. The aim of the study was to compare the classification accuracy achievable with a pan-sharpened image. In this study, a SPOT 5 image dated April 2013 was used: the 5 m panchromatic image and the 10 m multispectral image were pan sharpened. Four classification methods were investigated: maximum likelihood, decision tree, and support vector machine at the pixel level, and object-based classification. The SPOT 5 pan-sharpened image was used to classify sunflower and corn at a study site located in the Kadirli region of Osmaniye, Turkey. The effects of the pan-sharpened image on classification results were also examined. Accuracy assessment showed that object-based classification yielded better overall accuracy than the other methods. The results indicate that these classification methods can be used to identify sunflower and corn and to estimate crop areas.
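
    The abstract does not state which fusion algorithm was used for pan sharpening; as a generic illustration of the idea, the sketch below applies a simple Brovey-style ratio fusion to toy arrays (band count and image sizes are arbitrary assumptions).

    ```python
    import numpy as np

    def brovey_pansharpen(ms, pan):
        """Brovey-style fusion: rescale each multispectral band (already
        resampled to the panchromatic grid) by pan / band-sum."""
        intensity = ms.sum(axis=0)
        return ms * (pan / (intensity + 1e-12))

    rng = np.random.default_rng(8)
    ms = rng.uniform(0.1, 1.0, size=(3, 64, 64))   # toy multispectral bands
    pan = rng.uniform(0.1, 1.0, size=(64, 64))     # toy panchromatic band

    sharp = brovey_pansharpen(ms, pan)
    print(sharp.shape)                           # (3, 64, 64)
    print(np.allclose(sharp.sum(axis=0), pan))   # band sum now matches pan
    ```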

  8. Energy-efficiency based classification of the manufacturing workstation

    NASA Astrophysics Data System (ADS)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, which was further implemented by several other directives. As a consequence, many products (e.g. home appliances, tyres, light bulbs, houses) now carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined, and on this basis a classification methodology has been developed. It covers specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of an energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing activity planning and programming. A case study on the classification of an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.

  9. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  10. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  11. Fast rule-based bioactivity prediction using associative classification mining

    PubMed Central

    2012-01-01

    Relating chemical features to bioactivities is critical in molecular design and is used extensively in the lead discovery and optimization process. A variety of techniques from statistics, data mining and machine learning have been applied to this process. In this study, we utilize a collection of methods, called associative classification mining (ACM), which are popular in the data mining community, but so far have not been applied widely in cheminformatics. More specifically, classification based on predictive association rules (CPAR), classification based on multiple association rules (CMAR) and classification based on association rules (CBA) are employed on three datasets using various descriptor sets. Experimental evaluations on anti-tuberculosis (antiTB), mutagenicity and hERG (the human Ether-a-go-go-Related Gene) blocker datasets show that these three methods are computationally scalable and appropriate for high speed mining. Additionally, they provide comparable accuracy and efficiency to the commonly used Bayesian and support vector machines (SVM) methods, and produce highly interpretable models. PMID:23176548
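
    As a rough illustration of class association rules, the sketch below mines rules whose consequent is an activity label, using the mlxtend library; the descriptor flags and thresholds are invented for the example, and CPAR/CMAR/CBA use more refined rule selection than this plain Apriori pass.

    ```python
    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Toy one-hot table: molecular descriptor flags plus a class label column.
    data = pd.DataFrame({
        'ring_system':  [1, 1, 0, 1, 0, 1],
        'nitro_group':  [1, 0, 0, 1, 0, 1],
        'high_logP':    [0, 1, 1, 0, 1, 0],
        'class_active': [1, 0, 0, 1, 0, 1],
    }, dtype=bool)

    itemsets = apriori(data, min_support=0.3, use_colnames=True)
    rules = association_rules(itemsets, metric='confidence', min_threshold=0.8)

    # Keep only rules that predict the class label (class association rules).
    is_class_rule = rules['consequents'].apply(
        lambda c: c == frozenset({'class_active'}))
    print(rules[is_class_rule][['antecedents', 'support', 'confidence']])
    ```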

  12. Measuring Communication Heterogeneity Between Multiple Web-Based Agents

    NASA Astrophysics Data System (ADS)

    Bravo, Maricela; Coronel, Martha

    Communication between multiple agents is essential for cooperation, negotiation, and making decisions of mutual benefit. There is growing interest in automating communication processes between different agents in dynamic web-based environments. However, when agents are deployed and integrated in open and dynamic environments, the detailed syntax and semantics of their particular language implementations differ, causing the problem of communication heterogeneity. It is therefore necessary to measure the heterogeneity among all participating agents, and the number of required translations, when heterogeneous agents communicate. In this chapter we present a set of measures whose objective is to evaluate the minimal computational requirements before implementing a translation approach. Our measures are based on set theory, which has proved to be a good representation formalism in other areas. We show how to use the set of measures for two highly heterogeneous sets of agents.
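
    The chapter's concrete measures are not reproduced here, but a minimal set-theoretic sketch of the idea might look as follows, with each agent's language abstracted as a set of performatives (the agent names and vocabularies are invented).

    ```python
    from itertools import combinations

    # Each agent's communication language, abstracted as a set of performatives.
    agents = {
        'A': {'ask', 'tell', 'subscribe'},
        'B': {'ask', 'inform', 'cfp'},
        'C': {'request', 'inform', 'cfp'},
    }

    def jaccard_distance(s, t):
        """1 - |intersection| / |union|: 0 means identical languages."""
        return 1.0 - len(s & t) / len(s | t)

    # Pairwise heterogeneity, and which pairs need a translator at all.
    for (a, sa), (b, sb) in combinations(agents.items(), 2):
        d = jaccard_distance(sa, sb)
        print(f'{a}-{b}: distance={d:.2f}, translation needed: {d > 0}')
    ```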

  13. Networks based on collisions among mobile agents

    NASA Astrophysics Data System (ADS)

    González, Marta C.; Lind, Pedro G.; Herrmann, Hans J.

    2006-12-01

    We investigate in detail a recent model of colliding mobile agents [M.C. González, P.G. Lind, H.J. Herrmann, Phys. Rev. Lett. 96 (2006) 088702. cond-mat/0602091], used as an alternative approach for constructing evolving networks of interactions formed by collisions governed by suitable dynamical rules. The system of mobile agents evolves towards a quasi-stationary state which is, apart from small fluctuations, well characterized by the density of the system and the residence time of the agents. The residence time defines a collision rate, and by varying this collision rate, the system percolates at a critical value, with the emergence of a giant cluster whose critical exponents are the ones of two-dimensional percolation. Further, the degree and clustering coefficient distributions, and the average path length, show that the network associated with such a system presents non-trivial features which, depending on the collision rules, enables one not only to recover the main properties of standard networks, such as exponential, random and scale-free networks, but also to obtain other topological structures. To illustrate, we show a specific example where the obtained structure has topological features which characterize the structure and evolution of social networks accurately in different contexts, ranging from networks of acquaintances to networks of sexual contacts.

  14. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  15. A new classification of resin-based aesthetic adhesive materials.

    PubMed

    Ardu, Stefano; Braut, Vedrana; Uhac, Ivone; Benbachir, Nacer; Feilzer, Albert J; Krejci, Ivo

    2010-09-01

    The purpose of this article is to illustrate a new classification of resin-based aesthetic materials relying on the characterization of their matrix and filler morphology. Four samples per material were prepared for SEM evaluation. Each sample was treated with chloroform to dissolve its matrix and reveal the filler morphology. A general scheme of four different matrix systems, which characterize a material's level of hydrophobicity, can be identified. The subsequent filler analysis identifies a more complex scheme based on filler size and construction. A new classification based on matrix nature and filler morphology is proposed; based on this concept, the mechanical and aesthetic characteristics of the materials can be presumed.

  16. Dynamic Exploration of Helicopter Reconnaissance Through Agent-Based Modeling

    DTIC Science & Technology

    2000-09-01

    This work uses Multi-Agent System modeling to develop a simulation of tactical helicopter performance while conducting armed reconnaissance. It focuses on creating a model to support planning for the Test and Evaluation phase of the Comanche helicopter acquisition cycle. The model serves as an initial simulation laboratory for scenario planning, requirements forecasting, and platform comparison analyses. The model implements adaptive tactical movement with agent sensory and weaponry system characteristics. Agents are able to determine their movement direction and paths based on

  17. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. In integration, these tools provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  18. Subspace-based prototyping and classification of chromosome images.

    PubMed

    Wu, Qiang; Liu, Zhongmin; Chen, Tiehan; Xiong, Zixiang; Castleman, Kenneth R

    2005-09-01

    Chromosomes are essential genomic information carriers. Chromosome classification constitutes an important part of routine clinical and cancer cytogenetics analysis. Cytogeneticists perform visual interpretation of banded chromosome images according to the diagrammatic models of various chromosome types known as the ideograms, which mimic artists' depiction of the chromosomes. In this paper, we present a subspace-based approach for automated prototyping and classification of chromosome images. We show that 1) prototype chromosome images can be quantitatively synthesized from a subspace to objectively represent the chromosome images of a given type or population, and 2) the transformation coefficients (or projected coordinate values of sample chromosomes) in the subspace can be utilized as the extracted feature measurements for classification purposes. We examine in particular the formation of three well-known subspaces, namely the ones derived from principal component analysis (PCA), Fisher's linear discriminant analysis, and the discrete cosine transform (DCT). These subspaces are implemented and evaluated for prototyping two-dimensional (2-D) images and for classification of both 2-D images and one-dimensional profiles of chromosomes. Experimental results show that previously unseen prototype chromosome images of high visual quality can be synthesized using the proposed subspace-based method, and that PCA and the DCT significantly outperform the well-known benchmark technique of weighted density distribution functions in classifying 2-D chromosome images.
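
    A minimal sketch of the subspace idea, assuming synthetic stand-ins for vectorized chromosome data: fit one PCA subspace per class and assign a sample to the class whose subspace reconstructs it with the smallest residual. The component count and data are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    # Stand-ins for vectorized banding profiles of two chromosome types.
    class_a = rng.normal(0.0, 1.0, size=(50, 100))
    class_b = rng.normal(0.5, 1.0, size=(50, 100))

    # One PCA subspace per class; projecting samples into it gives the
    # transformation coefficients used as classification features.
    models = {label: PCA(n_components=5).fit(X)
              for label, X in (('A', class_a), ('B', class_b))}

    def classify(x):
        """Assign the class whose subspace reconstructs the sample best."""
        residuals = {}
        for label, pca in models.items():
            coeffs = pca.transform(x.reshape(1, -1))
            recon = pca.inverse_transform(coeffs).ravel()
            residuals[label] = np.linalg.norm(x - recon)
        return min(residuals, key=residuals.get)

    print(classify(class_b[0]))  # expected: 'B'
    ```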

  19. Epileptic EEG classification based on kernel sparse representation.

    PubMed

    Yuan, Qi; Zhou, Weidong; Yuan, Shasha; Li, Xueli; Wang, Jiwen; Jia, Guijuan

    2014-06-01

    The automatic identification of epileptic EEG signals is significant both in relieving the heavy workload of visually inspecting EEG recordings and in the treatment of epilepsy. This paper presents a novel method based on the theory of sparse representation to identify epileptic EEGs. First, the raw EEG epochs are preprocessed via Gaussian low-pass filtering and a differential operation. Then, in the sparse representation based classification (SRC) scheme, a test EEG sample is sparsely represented on the training set by solving an l1-minimization problem, and the representation residuals associated with ictal and interictal training samples are computed. The test EEG sample is assigned to the class that yields the minimum representation residual. Unlike conventional EEG classification methods, the choice and calculation of EEG features are thus avoided in the proposed framework. Moreover, the kernel trick is employed to generate a kernel version of the SRC method, improving the separability between ictal and interictal classes. The kernel SRC achieved a satisfactory recognition accuracy of 98.63% for ictal versus interictal EEG classification, as well as for ictal versus normal EEG classification. In addition, its speed makes the kernel SRC suitable for real-time seizure monitoring applications in the near future.
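
    The SRC recipe can be sketched in a few lines; here greedy Orthogonal Matching Pursuit stands in for the paper's l1-minimization solver, and the "EEG epochs" are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(2)
    # Columns of D are training epochs (synthetic), one block per class.
    ictal = rng.normal(1.0, 1.0, size=(64, 20))
    interictal = rng.normal(-1.0, 1.0, size=(64, 20))
    D = np.hstack([ictal, interictal])
    labels = np.array(['ictal'] * 20 + ['interictal'] * 20)

    def src_classify(y, n_nonzero=5):
        """Sparse-code y over the training dictionary, then pick the class
        whose training atoms alone give the smallest reconstruction residual."""
        omp = OrthogonalMatchingPursuit(
            n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, y)
        residuals = {}
        for cls in ('ictal', 'interictal'):
            masked = np.where(labels == cls, omp.coef_, 0.0)
            residuals[cls] = np.linalg.norm(y - D @ masked)
        return min(residuals, key=residuals.get)

    print(src_classify(rng.normal(1.0, 1.0, size=64)))  # expected: 'ictal'
    ```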

  20. Erythrocyte shape classification using integral-geometry-based methods.

    PubMed

    Gual-Arnau, X; Herold-García, S; Simó, A

    2015-07-01

    Erythrocyte shape deformations are related to different important illnesses. In this paper, we focus on one of the most important: sickle cell disease. This disease causes hardening or polymerization of the hemoglobin contained in the erythrocytes. The study of this process using digital images of peripheral blood smears can offer useful results for the clinical diagnosis of these illnesses. In particular, it would be very valuable to find a rapid and reproducible automatic classification method to quantify the number of deformed cells and so gauge the severity of the illness. We show the good results obtained in the automatic classification of erythrocytes into normal cells, sickle cells, and cells with other deformations, using a set of functions based on integral-geometry methods, an active-contour-based segmentation method, and a k-NN classification algorithm. Blood specimens were obtained from patients with sickle cell disease. Seventeen peripheral blood smears were obtained for the study, and 45 images of different fields were acquired. A specialist selected the cells to use, determining which cells in the images were normal, elongated, or presented other deformations. Automatic classification, with cross-validation of errors, was carried out using the proposed descriptors and two other functions used in previous studies.

  1. Tutorial on agent-based modeling and simulation. Part 2 : how to model with agents.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2006-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems composed of interacting autonomous agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to do research. Some have gone so far as to contend that ABMS is a new way of doing science. Computational advances make possible a growing number of agent-based applications across many fields. Applications range from modeling agent behavior in the stock market and supply chains to predicting the spread of epidemics and the threat of bio-warfare, and from modeling the growth and decline of ancient civilizations to modeling the complexities of the human immune system, and many more. This tutorial describes the foundations of ABMS, identifies ABMS toolkits and development methods illustrated through a supply chain example, and provides thoughts on the appropriate contexts for ABMS versus conventional modeling techniques.
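
    As a minimal taste of ABMS (not the tutorial's supply-chain example), the sketch below implements the classic wealth-exchange toy model: identical autonomous agents following one simple local rule produce a skewed emergent distribution.

    ```python
    import random

    class Agent:
        """A minimal autonomous agent: it holds wealth and trades on contact."""
        def __init__(self, wealth=1):
            self.wealth = wealth

        def step(self, population):
            # Behavioral rule: give one unit to a randomly met agent, if able.
            if self.wealth > 0:
                random.choice(population).wealth += 1
                self.wealth -= 1

    def run(n_agents=100, n_steps=200, seed=42):
        random.seed(seed)
        agents = [Agent() for _ in range(n_agents)]
        for _ in range(n_steps):
            for agent in agents:
                agent.step(agents)
        return sorted(a.wealth for a in agents)

    # Emergent outcome: a skewed wealth distribution from a uniform start.
    wealth = run()
    print('min/median/max wealth:', wealth[0], wealth[len(wealth) // 2], wealth[-1])
    ```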

  2. Segmentation Based Fuzzy Classification of High Resolution Images

    NASA Astrophysics Data System (ADS)

    Rao, Mukund; Rao, Suryaprakash; Masser, Ian; Kasturirangan, K.

    Information extraction from satellite images is the process of delineating entities in the image which pertain to some feature on the earth; by associating an attribute with each entity, a classification of the image is obtained. Classification is a common technique for extracting information from remote sensing data, and by and large the common classification techniques exploit the spectral characteristics of remote sensing images, attempting to detect patterns in spectral information to classify images. These are based on a per-pixel analysis of the spectral information: "clustering" or "grouping" of pixels is done to generate meaningful thematic information. Most classification techniques apply statistical pattern recognition to image spectral vectors to "label" each pixel with appropriate class information from a set of training information. Segmentation, on the other hand, is not new, but it is still seldom used in image processing of remotely sensed data. Although there has been a lot of development in the segmentation of grey-tone images in this field and in others, such as robotic vision, there has been little progress in the segmentation of colour or multi-band imagery. Especially within the last two years many new segmentation algorithms and applications have been developed, but not all of them lead to qualitatively convincing results while being robust and operational. One reason is that the segmentation of an image into a given number of regions is a problem with a huge number of possible solutions. Newer algorithms based on a fractal approach could eventually revolutionize image processing of remotely sensed data. The paper looks at applying spatial concepts to image processing, paving the way to algorithmically formulating some more advanced aspects of cognition and inference. In GIS-based spatial analysis, vector-based tools have already been able to support advanced tasks generating new knowledge. By identifying objects (as segmentation results) from

  3. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour is particularly influential on the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations with similar characteristics are present in locations with similar topographic conditions, unless natural disturbances such as landslides and forest fires, or artificial disturbances such as deforestation and man-made plantation, bring about changes in those conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results showed that object-based classification is more effective for producing a vegetation map than pixel-based classification.

  4. Lung nodule classification with multilevel patch-based context analysis.

    PubMed

    Zhang, Fan; Song, Yang; Cai, Weidong; Lee, Min-Zhao; Zhou, Yun; Huang, Heng; Shan, Shimin; Fulham, Michael J; Feng, Dagan D

    2014-04-01

    In this paper, we propose a novel classification method for four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct a concentric multilevel partition; a new feature set is then designed to incorporate intensity, texture, and gradient information for image patch description; and finally a contextual latent semantic analysis-based classifier is designed to calculate probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.

  5. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that Mediterranean tropical cyclone-like vortices and Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, in which tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected among those which developed during this time frame, are analyzed. In particular, the classification allows discrimination between two cyclones (which occurred in October 1996 and March 1999) that both display a very well

  6. Classification of Regional Ionospheric Disturbances Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil

    2016-07-01

    The ionosphere is an anisotropic, inhomogeneous, time-varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only by indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive scales in period, duration, amplitude and frequency of disturbances. Since geomagnetic and solar indices such as Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and the W-index provide information about variability on a global scale, the identification and classification of regional disturbances poses a challenge. The main aim of this study is to classify the regional effects of global geomagnetic storms according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, which is one of the major parameters of the ionosphere, is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. In this work, for the automated classification of regional disturbances, a classification technique based on the Support Vector Machine (SVM), a robust machine learning technique that has found widespread use, is proposed. SVM is a supervised learning model used for classification, with an associated learning algorithm that analyzes the data and recognizes patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding the data in higher-dimensional feature spaces. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia, using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. As a result of implementing the developed classification
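
    A minimal sketch of the SVM step, assuming synthetic stand-ins for TEC-derived features; scikit-learn's RBF-kernel SVC illustrates the nonlinear embedding the abstract mentions, while the real study trains on GPS-derived TEC together with the listed indices.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    # Invented features per event (e.g., TEC deviation amplitude, duration,
    # rate of change), labeled quiet=0 / disturbed=1.
    X = np.vstack([rng.normal(0, 1, (80, 3)), rng.normal(2, 1, (80, 3))])
    y = np.array([0] * 80 + [1] * 80)

    # The RBF kernel implicitly embeds the data in a higher-dimensional
    # feature space, giving a nonlinear decision boundary.
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
    print('CV accuracy:', cross_val_score(clf, X, y, cv=5).mean())
    ```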

  7. LiDAR point classification based on sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Nan; Pfeifer, Norbert; Liu, Chun

    2017-04-01

    In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data is represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. It turns out that SRC needs only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes. Each index is called a mode: three spatial modes in directions X, Y, Z and one feature mode. The sparsity algorithm seeks the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode; those matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structured dictionary along each mode is built; the overall structured dictionary is composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by a tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. It is expected that the original tensor should be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with
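
    The Tucker step can be illustrated with the tensorly library (an assumed tooling choice; the paper does not name one): a toy 4-order tensor with three spatial modes and one feature mode is factored into a small core and one matrix per mode.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import tucker

    # Toy 4-order "LiDAR tensor": spatial modes X, Y, Z plus one feature mode.
    rng = np.random.default_rng(4)
    tensor = tl.tensor(rng.normal(size=(5, 5, 5, 8)))

    # Tucker decomposition: a core tensor times one factor matrix per mode.
    core, factors = tucker(tensor, rank=[3, 3, 3, 4])

    print('core shape:', core.shape)                     # (3, 3, 3, 4)
    print('factor shapes:', [f.shape for f in factors])  # one matrix per mode
    recon = tl.tucker_to_tensor((core, factors))
    print('relative error:', float(tl.norm(recon - tensor) / tl.norm(tensor)))
    ```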

  8. A wrapper-based approach to image segmentation and classification.

    PubMed

    Farmer, Michael E; Jain, Anil K

    2005-12-01

    The traditional processing flow of segmentation followed by classification in computer vision assumes that segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object being extracted from the scene. This is further complicated by the lack of clearly defined metrics for evaluating the quality of segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues by using the object classification subsystem as an integral part of the segmentation. This provides contextual information regarding the objects to be segmented, and allows us to use the probability of correct classification as a metric to determine the quality of the segmentation. We view traditional segmentation as a filter operating on the image that is independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection: our method wraps the segmentation and classification together and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This represents an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since those algorithms still require an underlying homogeneity in some parameter space. Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method can be considered as an image segmentation framework, within which existing image segmentation

  9. Iron Oxide Nanoparticle Based Contrast Agents for Magnetic Resonance Imaging.

    PubMed

    Shen, Zheyu; Wu, Aiguo; Chen, Xiaoyuan

    2017-05-01

    Magnetic iron oxide nanoparticles (MIONs) have attracted enormous attention due to their wide applications, including magnetic separation, magnetic hyperthermia, and use as contrast agents for magnetic resonance imaging (MRI). This review article introduces methods of synthesizing MIONs and their application as MRI contrast agents. Many methods have been reported for the synthesis of MIONs; here we focus only on liquid-based synthesis methods, including aqueous-phase and organic-phase methods. In addition, MIONs larger than 10 nm can be used as negative contrast agents, while the recently emerged extremely small MIONs (ES-MIONs), smaller than 5 nm, are potential positive contrast agents. In this review, we focus on ES-MIONs because they avoid the disadvantages of MION-based T2-weighted and gadolinium-chelate-based T1-weighted contrast agents.

  10. Towards an agent-oriented programming language based on Scala

    NASA Astrophysics Data System (ADS)

    Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran

    2012-09-01

    Scala and its multi-threaded model based on actors represent an excellent framework for developing purely reactive agents. This paper presents early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced BDI agent architectures. The main advantage of the new language over many existing solutions for programming BDI agents is a natural and straightforward integration of imperative and declarative programming constructs under a single development framework.

  11. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  12. Character-based DNA barcoding: a superior tool for species classification.

    PubMed

    Bergmann, Tjard; Hadrys, Heike; Breves, Gerhard; Schierwater, Bernd

    2009-01-01

    In zoonosis research, only correctly assigned host-agent-vector associations can lead to success. Given that most biological species on Earth, from agents to hosts and from prokaryotes to vertebrates, are still undetected, the development of a reliable and universal diversity detection tool becomes a conditio sine qua non. In this context, modern molecular-genetic techniques have, at breathtaking speed, become acknowledged tools for the classification of life forms at all taxonomic levels. While previous DNA-barcoding techniques were criticised for several reasons (Moritz and Cicero, 2004; Rubinoff et al., 2006a, b; Rubinoff, 2006; Rubinoff and Haines, 2006), a new approach, so-called CAOS-barcoding (Character Attribute Organisation System), avoids most of the weak points. Traditional DNA-barcoding approaches are distance based, i.e. they use genetic distances and tree construction algorithms for the classification of species or lineages. The definition of limit values is enforced, which prohibits a discrete or clear assignment. In comparison, the new character-based barcoding (CAOS-barcoding; DeSalle et al., 2005; DeSalle, 2006; Rach et al., 2008) works with discrete single characters and character combinations, which permits a clear, unambiguous classification. In Hannover (Germany) we are optimising this system and developing a semi-automatic high-throughput procedure for the hosts, agents and vectors studied within the Zoonosis Centre of the "Stiftung Tierärztliche Hochschule Hannover". Our primary research concentrates on insects, the most successful and species-rich animal group on Earth (every fourth animal is a bug). One subgroup, the winged insects (Pterygota), represents the outstanding majority of all zoonosis-relevant animal vectors.
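
    A toy sketch of the character-based idea: find alignment positions where every ingroup sequence shares a state that never occurs in the outgroup ("pure" diagnostic characters in CAOS terms). The sequences here are invented and far shorter than real barcodes.

    ```python
    # Aligned toy sequences for two groups (e.g., two vector lineages).
    group_a = ['ACGTAC', 'ACGTTC', 'ACGTAC']
    group_b = ['TCGAAC', 'TCGATC', 'TCGAAC']

    def pure_diagnostic_positions(ingroup, outgroup):
        """Positions where all ingroup members share one state that is
        absent from the outgroup."""
        diagnostics = []
        for pos in range(len(ingroup[0])):
            in_states = {seq[pos] for seq in ingroup}
            out_states = {seq[pos] for seq in outgroup}
            if len(in_states) == 1 and not (in_states & out_states):
                diagnostics.append((pos, in_states.pop()))
        return diagnostics

    # A new sample matching these states would be assigned to group A.
    print(pure_diagnostic_positions(group_a, group_b))  # [(0, 'A'), (3, 'T')]
    ```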

  13. Classification data mining method based on dynamic RBF neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data with information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining (DM) technology has developed rapidly to meet this need. However, DM often faces data that are noisy, disordered and nonlinear. Fortunately, the Artificial Neural Network (ANN) is well suited to these problems because of its robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper discusses in detail the application of ANN methods in DM, based on an analysis of data mining technologies, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate plenty of unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of RBF networks, driven by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
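
    A minimal sketch of the on-line idea, assuming fixed centers and a squared-error gradient step on the output weights only; the paper's ILA also adapts other RBF parameters, which this toy version omits.

    ```python
    import numpy as np

    class IncrementalRBF:
        """RBF network whose output weights are updated sample by sample
        via gradient descent, avoiding batch retraining."""
        def __init__(self, centers, width=1.0, lr=0.1):
            self.centers = np.asarray(centers, dtype=float)
            self.width = width
            self.lr = lr
            self.weights = np.zeros(len(self.centers))

        def _hidden(self, x):
            dists = np.linalg.norm(self.centers - np.asarray(x, float), axis=1)
            return np.exp(-dists ** 2 / (2 * self.width ** 2))

        def predict(self, x):
            return float(self.weights @ self._hidden(x))

        def partial_fit(self, x, target):
            h = self._hidden(x)
            error = target - self.weights @ h
            self.weights += self.lr * error * h  # one cheap on-line update

    rng = np.random.default_rng(5)
    net = IncrementalRBF(centers=[[0, 0], [2, 2]], width=1.0, lr=0.2)
    for _ in range(500):
        label = int(rng.integers(2))
        x = rng.normal([0, 0] if label == 0 else [2, 2], 0.3)
        net.partial_fit(x, 1.0 if label else -1.0)
    print(net.predict([0, 0]), net.predict([2, 2]))  # approx. -1 and +1
    ```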

  14. Classification of Hearing Loss Disorders Using Teoae-Based Descriptors

    NASA Astrophysics Data System (ADS)

    Hatzopoulos, Stavros Dimitris

    Transiently Evoked Otoacoustic Emissions (TEOAE) are signals produced by the cochlea upon stimulation by an acoustic click. Within the context of this dissertation, it was hypothesized that the relationship between TEOAEs and the functional status of the OHCs provided an opportunity for designing a TEOAE-based clinical procedure that could be used to assess cochlear function. To understand the nature of the TEOAE signals in the time and frequency domains, several different analyses were performed. Using normative input-output (IO) curves, short-time FFT analyses and cochlear computer simulations, it was found that optimizing the hearing loss classification requires a complete 20 ms TEOAE segment. It was also determined that the various 2-D filtering methods (median and averaging filtering masks, LP-FFT) used to enhance the TEOAE S/N offered minimal improvement (less than 6 dB per stimulus level); higher S/N improvements resulted in TEOAE sequences that were over-smoothed. The final classification algorithm was based on a statistical analysis of raw FFT data and, when applied to a sample set of clinically obtained TEOAE recordings (from 56 normal and 66 hearing-loss subjects), correctly identified 94.3% of the normal and 90% of the hearing-loss subjects at the 80 dB SPL stimulus level. To enhance the discrimination between the conductive and sensorineural populations, data from the 68 dB SPL stimulus level were used, which yielded a normal classification rate of 90.2%, a hearing loss classification rate of 87.5% and a conductive-sensorineural classification rate of 87%. Among the hearing-loss populations, the best discrimination was obtained for the otosclerosis group and the worst for the acute acoustic trauma group.

  15. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    PubMed

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, system accuracy depends on classification results, and classification accuracy plays an imperative role in various domains. Non-parametric classifiers like K-Nearest Neighbor (KNN) are the most widely used classifiers for pattern analysis. Despite its simplicity and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, i.e. "k", used in the computation. At present it is hard to find the optimal value of "k" with any statistical algorithm that gives perfect accuracy in terms of a low misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is, like KNN, non-parametric in nature. AVNM uses a weighted voting mechanism with sample space reduction to learn and examine the predicted class label for an unidentified sample. AVNM is free from any initial selection of a predefined variable or neighbor selection as found in the KNN algorithm, and it also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were made on 10 standard datasets taken from the UCI database and one manually created dataset. The experimental results show that the proposed AVNM rule outperforms the KNN classifier and its variants; results based on the confusion-matrix accuracy parameter show higher accuracy for the AVNM rule. The proposed rule, based on a sample space reduction mechanism for identifying an optimal number of nearest neighbors, automates nearest-neighbor selection and yields better classification accuracy and a lower error rate than the state-of-the-art algorithm, KNN, and its variants.

  16. Proposed Classification of Auriculotemporal Nerve, Based on the Root System

    PubMed Central

    Komarnitki, Iulian; Tomczyk, Jacek; Ciszek, Bogdan; Zalewska, Marta

    2015-01-01

    The topography of the auriculotemporal nerve (ATN) root system is the main criterion of this nerve classification. Previous publications indicate that ATN may have between one and five roots. Most common is a one- or two-root variant of the nerve structure. The problem of many publications is the inconsistency of nomenclature which concerns the terms “roots”, “connecting branches”, or “branches” that are used to identify the same structures. This study was performed on 80 specimens (40 adults and 40 fetuses) to propose a classification based on: (i) the number of roots, (ii) way of root division, and (iii) configuration of interradicular fibers that form the ATN trunk. This new classification is a remedy for inconsistency of nomenclature of ATN in the infratemporal fossa. This classification system has proven beneficial when organizing all ATN variants described in previous studies and could become a helpful tool for surgeons and dentists. Examination of ATN from the infratemporal fossa of fetuses (the youngest was at 18 weeks gestational age) showed that, at that stage, the nerve is fully developed. PMID:25856464

  17. Classification Under Label Noise Based on Outdated Maps

    NASA Astrophysics Data System (ADS)

    Maas, A.; Rottensteiner, F.; Heipke, C.

    2017-05-01

    Supervised classification of remotely sensed images is a classical method for change detection. The task requires training data in the form of image data with known class labels, whose manual generation is time-consuming. If the labels are acquired from an outdated map, the classifier must cope with errors in the training data. These errors, referred to as label noise, typically occur in clusters in object space, because they are caused by land cover changes over time. In this paper we adapt a label-noise-tolerant training technique for classification so that the fact that changes affect larger clusters of pixels is taken into account. We also integrate the existing map into an iterative classification procedure to act as a prior in regions which are likely to contain changes. Our experiments are based on three test areas, using real images with simulated existing databases. Our results show that this method helps to distinguish between real changes over time and false detections caused by misclassification, and thus improves the accuracy of the classification results.

  18. Rule-Based Classification of Chemical Structures by Scaffold.

    PubMed

    Schuffenhauer, Ansgar; Varin, Thibault

    2011-08-01

    Databases of small organic molecules usually contain millions of structures, and the screening decks of pharmaceutical companies contain more than a million structures. Nevertheless, chemical substructure searching in these databases can be performed interactively in seconds, so structural classification of these databases has not really been missed for the purpose of finding data for individual chemical substructures. However, a full-deck high-throughput screen also produces activity data for more than a million substances. How can this amount of data be analyzed? Which are the active scaffolds identified by an assay? To answer such questions, systematic classifications of molecules by scaffold are needed. This review describes how molecules can be hierarchically classified by their scaffolds and explains how such classifications can be used to identify active scaffolds in an HTS data set. Once active classes are identified, they need to be visualized in the context of related scaffolds in order to understand SAR; consequently, such visualizations are another topic of this review. In addition, scaffold-based diversity measures are discussed, and an outlook is given on the potential impact of structural classifications on a chemically aware semantic web.
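
    A minimal sketch of scaffold-wise grouping using RDKit's Bemis-Murcko scaffolds, a flat simplification of the hierarchical classifications discussed in the review; the SMILES deck and activity flags are invented.

    ```python
    from collections import defaultdict
    from rdkit.Chem.Scaffolds import MurckoScaffold

    # Toy "screening deck": SMILES with hypothetical activity flags.
    deck = [
        ('c1ccccc1CCN', True),      # phenethylamine-like
        ('c1ccccc1CCO', True),
        ('C1CCCCC1CCN', False),     # saturated-ring analogue
        ('c1ccc2ccccc2c1', False),  # naphthalene
    ]

    # Group molecules by their Bemis-Murcko scaffold.
    classes = defaultdict(list)
    for smiles, active in deck:
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smiles)
        classes[scaffold].append(active)

    # An "active scaffold" is one enriched in actives within its class.
    for scaffold, hits in classes.items():
        print(scaffold, f'{sum(hits)}/{len(hits)} active')
    ```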

  19. Structure-based classification and ontology in chemistry

    PubMed Central

    2012-01-01

    Background: Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results: We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion: Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic

  20. Structure-based classification and ontology in chemistry.

    PubMed

    Hastings, Janna; Magka, Despoina; Batchelor, Colin; Duan, Lian; Stevens, Robert; Ennis, Marcus; Steinbeck, Christoph

    2012-04-05

    Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based

  1. An AERONET-based aerosol classification using the Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-09-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on "point sources" (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.
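
    A two-dimensional sketch of the nearest-reference-cluster rule (the real procedure works in five dimensions and trims cluster members beyond Mahalanobis distance 2); the cluster statistics below are synthetic, not AERONET values.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    # Synthetic reference clusters in a 2-D slice of the feature space
    # (e.g., single-scattering albedo vs. Angstrom exponent).
    dust = rng.multivariate_normal([0.90, 0.3], [[1e-4, 0], [0, 0.01]], 500)
    biomass = rng.multivariate_normal([0.85, 1.8], [[1e-4, 0], [0, 0.01]], 500)

    def mahalanobis(x, cluster):
        mu = cluster.mean(axis=0)
        vi = np.linalg.inv(np.cov(cluster, rowvar=False))
        d = x - mu
        return float(np.sqrt(d @ vi @ d))

    obs = np.array([0.86, 1.6])
    dists = {'Dust': mahalanobis(obs, dust),
             'Biomass Burning': mahalanobis(obs, biomass)}
    print(min(dists, key=dists.get), dists)  # nearest reference cluster wins
    ```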

  2. An AERONET-Based Aerosol Classification Using the Mahalanobis Distance

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-01-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on point sources (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.

  3. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    NASA Astrophysics Data System (ADS)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards comprising statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. It recognizes the gap in spatially explicit accuracy assessment techniques for complex spatial models, and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the receiver operating characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. It emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land

  4. Vehicle Maneuver Detection with Accelerometer-Based Classification

    PubMed Central

    Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F.

    2016-01-01

    In the mobile computing era, smartphones have become instrumental tools to develop innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this frame, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices to carry out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of the ad hoc mobile application developed. PMID:27690058

  5. LADAR And FLIR Based Sensor Fusion For Automatic Target Classification

    NASA Astrophysics Data System (ADS)

    Selzer, Fred; Gutfinger, Dan

    1989-01-01

    The purpose of this report is to show results of automatic target classification and sensor fusion for forward looking infrared (FLIR) and Laser Radar sensors. The sensor fusion data base was acquired from the Naval Weapon Center and it consists of coregistered Laser RaDAR (range and reflectance image), FLIR (raw and preprocessed image) and TV. Using this data base we have developed techniques to extract relevant object edges from the FLIR and LADAR which are correlated to wireframe models. The resulting correlation coefficients from both the LADAR and FLIR are fused using either the Bayesian or the Dempster-Shafer combination method so as to provide a higher confidence target classification level output. Finally, to minimize the correlation process the wireframe models are modified to reflect target range (size of target) and target orientation which is extracted from the LADAR reflectance image.
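
    As a hedged illustration of the Dempster-Shafer combination step mentioned above, the sketch below fuses two sensors' belief masses over a simple frame {target 'T', non-target 'N', unknown 'U'}; the mass values and helper name are invented for the example, not taken from the report.

        def dempster_combine(m1, m2):
            # m1, m2: mass assignments over 'T', 'N' and 'U' (whole frame),
            # each summing to 1; apply Dempster's rule of combination.
            combos = {}
            for a, pa in m1.items():
                for b, pb in m2.items():
                    if a == 'U':
                        key = b
                    elif b == 'U' or a == b:
                        key = a
                    else:
                        key = 'conflict'   # disjoint hypotheses
                    combos[key] = combos.get(key, 0.0) + pa * pb
            k = combos.pop('conflict', 0.0)
            return {key: v / (1.0 - k) for key, v in combos.items()}

        m_flir  = {'T': 0.6, 'N': 0.1, 'U': 0.3}
        m_ladar = {'T': 0.7, 'N': 0.2, 'U': 0.1}
        print(dempster_combine(m_flir, m_ladar))  # fused mass, mostly on 'T'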

  6. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic content are very important for content-based indexing and retrieval in video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against the labels collected from human judges, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.

  7. A simulation-based tutor that reasons about multiple agents

    SciTech Connect

    Rhodes Eliot, C. III; Park Woolf, B.

    1996-12-31

    This paper examines the problem of modeling multiple agents within an intelligent simulation-based tutor. Multiple agent and planning technology were used to enable the system to critique a human agent's reasoning about multiple agents. This perspective arises naturally whenever a student must learn to lead and coordinate a team of people. The system dynamically selected teaching goals, instantiated plans and modeled the student and the domain as it monitored the student's progress. The tutor provides one of the first complete integrations of a real-time simulation with knowledge-based reasoning. Other novel techniques of the system are reported, such as common-sense reasoning about plans, reasoning about protocol mechanisms, and using a real-time simulation for training.

  8. Classification of oxidative stress based on its intensity

    PubMed Central

    Lushchak, Volodymyr I.

    2014-01-01

    In living organisms, production of reactive oxygen species (ROS) is counterbalanced by their elimination and/or by prevention of their formation, which in concert typically maintains a steady-state (stationary) ROS level. However, this balance may be disturbed, leading to elevated ROS levels called oxidative stress. To the best of our knowledge, there is no broadly accepted system for classifying oxidative stress based on its intensity, and the system proposed here may therefore be helpful for the interpretation of experimental data. The oxidative stress field is a hot topic in biology and, to date, many details related to ROS-induced damage to cellular components, ROS-based signaling, and cellular responses and adaptation have been disclosed. However, researchers commonly experience substantial difficulties in correctly interpreting the development of oxidative stress, especially when its intensity needs to be characterized. Careful selection of specific biomarkers (ROS-modified targets) and some classification system may be helpful here. A classification of oxidative stress based on its intensity is proposed here. According to this classification there are four zones of function in the relationship between “Dose/concentration of inducer” and the measured “Endpoint”: I – basal oxidative stress (BOS); II – low intensity oxidative stress (LOS); III – intermediate intensity oxidative stress (IOS); IV – high intensity oxidative stress (HOS). The proposed classification will be helpful for describing experimental data where oxidative stress is induced and for systematizing them based on intensity, but further studies will be needed to clearly discriminate between stresses of different intensity. PMID:26417312

  9. Expected energy-based restricted Boltzmann machine for classification.

    PubMed

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain.
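
    A minimal numpy sketch contrasting the two output definitions for a visible vector that concatenates the input with a one-hot class vector; visible-bias terms are omitted for brevity and the weights are random placeholders rather than weights learned as in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        W, b_hid = rng.normal(size=(64, 794)), rng.normal(size=64)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fe_output(v):
            # FE-RBM-style output: negative free energy, a sum of softplus terms
            z = W @ v + b_hid
            return float(np.sum(np.logaddexp(0.0, z)))

        def ee_output(v):
            # EE-RBM-style output: negative expected energy, i.e. each hidden
            # unit's input weighted by its activation probability
            z = W @ v + b_hid
            return float(np.sum(sigmoid(z) * z))

        # v = 784-dim input (e.g. an MNIST image) + 10-dim one-hot class vector;
        # classification picks the class vector that maximizes the output.
        v = rng.random(794)
        print(fe_output(v), ee_output(v))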

  10. A proposed classification scheme for Ada-based software products

    NASA Technical Reports Server (NTRS)

    Cernosek, Gary J.

    1986-01-01

    As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing the potential for varying levels of quality to result in Ada programs, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A 5-level classification scheme is proposed that attempts to decompose the potentially broad spectrum of quality which Ada programs may possess. The number of classes and their corresponding names are not as important as the mere fact that there needs to be some set of criteria from which to evaluate programs existing in Ada. Exact criteria for each class are not presented, nor are any detailed suggestions on how to effectively implement this quality assessment. The idea of Ada-based software classification is introduced and a set of requirements on which to base further research and development is suggested.

  11. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
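
    The following is an illustrative sketch of the routing idea, not the patented implementation: a term-frequency vector is computed for the new document, each cluster centroid is scored by cosine similarity, and a low best score signals that a new cluster agent should be created. All names and the threshold are our own.

        import math
        from collections import Counter

        def doc_vector(text):
            return Counter(text.lower().split())

        def cosine(u, v):
            dot = sum(u[t] * v.get(t, 0) for t in u)
            nu = math.sqrt(sum(x * x for x in u.values()))
            nv = math.sqrt(sum(x * x for x in v.values()))
            return dot / (nu * nv) if nu and nv else 0.0

        def route(new_doc, cluster_centroids, threshold=0.2):
            v = doc_vector(new_doc)
            scores = {cid: cosine(v, c) for cid, c in cluster_centroids.items()}
            if not scores or max(scores.values()) < threshold:
                return "new-cluster"   # no agent is similar enough
            return max(scores, key=scores.get)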

  12. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  13. Agent-Based Modeling of Growth Processes

    ERIC Educational Resources Information Center

    Abraham, Ralph

    2014-01-01

    Growth processes abound in nature, and are frequently the target of modeling exercises in the sciences. In this article we illustrate an agent-based approach to modeling, in the case of a single example from the social sciences: bullying.

  15. Gadolinium-Based Contrast Agents for MR Cancer Imaging

    PubMed Central

    Zhou, Zhuxian; Lu, Zheng-Rong

    2013-01-01

    Magnetic resonance imaging (MRI) is a clinical imaging modality effective for anatomical and functional imaging of diseased soft tissues, including solid tumors. MRI contrast agents have been routinely used for detecting tumors at an early stage. Gadolinium-based contrast agents are the most commonly used contrast agents in clinical MRI. There have been significant efforts to design and develop novel Gd(III) contrast agents with high relaxivity, low toxicity and specific tumor binding. The relaxivity of Gd(III) contrast agents can be increased by proper chemical modification. The toxicity of Gd(III) contrast agents can be reduced by increasing the agents’ thermodynamic and kinetic stability, as well as by optimizing their pharmacokinetic properties. The increasing knowledge in the field of cancer genomics and biology provides an opportunity for designing tumor-specific contrast agents. Various new Gd(III) chelates have been designed and evaluated in animal models for more effective cancer MRI. This review outlines the design and development, physicochemical properties, and in vivo properties of several classes of Gd(III)-based MR contrast agents for tumor imaging. PMID:23047730

  16. Classification of proximal humeral fractures based on a pathomorphologic analysis.

    PubMed

    Resch, Herbert; Tauber, Mark; Neviaser, Robert J; Neviaser, Andrew S; Majed, Addie; Halsey, Tim; Hirzinger, Corinna; Al-Yassari, Ghassan; Zyto, Karol; Moroder, Philipp

    2016-03-01

    The purpose of this study was to analyze the pathomorphology of proximal humeral fractures to determine relevant and reliable parameters for fracture classification. A total of 100 consecutive acute proximal humeral fractures in adult patients were analyzed by 2 non-independent observers from a single shoulder department using a standardized protocol based on biplane radiographs and 3-dimensional computed tomography scans. A fracture classification system based on the most reliable key features of the pathomorphologic analysis was created, and its reliability was tested by 6 independent shoulder experts analyzing another 100 consecutive proximal humeral fractures. The head position in relation to the shaft (varus, valgus, sagittal deformity) and the presence of tuberosity fractures showed a higher interobserver reliability (κ > 0.8) than measurements for medial hinge, shaft, and tuberosity displacement, metaphyseal extension, fracture impaction, as well as head-split component identification (κ < 0.7). These findings were used to classify nondisplaced proximal humeral fractures as type 1, fractures with normal coronal head position but sagittal deformity as type 2, valgus fractures as type 3, varus fractures as type 4, and fracture dislocations as type 5. The fracture type was further combined with the fractured main fragments (G for greater tuberosity, L for lesser). Interobserver and intraobserver reliability analysis for the fracture classification revealed a κ value (95% confidence interval) of 0.700 (0.631-0.767) and 0.917 (0.879-0.943), respectively. The new classification system with emphasis on the qualitative aspects of proximal humeral fractures showed high reliability when based on a standardized imaging protocol including computed tomography scans.

  17. Optimization Based Tumor Classification from Microarray Gene Expression Data

    PubMed Central

    Dagliyan, Onur; Uney-Yuksektepe, Fadime; Kavakli, I. Halil; Turkay, Metin

    2011-01-01

    Background An important use of data obtained from microarray measurements is the classification of tumor types with respect to genes that are either up or down regulated in specific cancer types. A number of algorithms have been proposed to obtain such classifications. These algorithms usually require parameter optimization to obtain accurate results depending on the type of data. Additionally, it is highly critical to find an optimal set of markers among those up or down regulated genes that can be clinically utilized to build assays for the diagnosis or to follow progression of specific cancer types. In this paper, we employ a mixed integer programming based classification algorithm named hyper-box enclosure method (HBE) for the classification of some cancer types with a minimal set of predictor genes. This optimization-based method, which is a user-friendly and efficient classifier, may allow clinicians to diagnose and follow progression of certain cancer types. Methodology/Principal Findings We apply the HBE algorithm to some well known data sets such as leukemia, prostate cancer, diffuse large B-cell lymphoma (DLBCL), and small round blue cell tumors (SRBCT) to find predictor genes that can be utilized for diagnosis and prognosis in a robust manner with high accuracy. Our approach does not require any modification or parameter optimization for each data set. Additionally, information gain attribute evaluator, relief attribute evaluator and correlation-based feature selection methods are employed for the gene selection. The results are compared with those from other studies and the biological roles of selected genes in the corresponding cancer type are described. Conclusions/Significance The performance of our algorithm overall was better than the other algorithms reported in the literature and classifiers found in the WEKA data-mining package. Since it does not require parameter optimization and it consistently achieves very high prediction rates on different types of
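
    A much-simplified sketch of the hyper-box idea (the actual HBE boxes are found by mixed integer programming; the boxes and helper names below are placeholders): each class is covered by axis-aligned boxes in gene-expression space, and a sample takes the class of a box that encloses it.

        import numpy as np

        def in_box(x, lo, hi):
            return bool(np.all(x >= lo) and np.all(x <= hi))

        def classify(x, boxes):
            # boxes: list of (class_label, lower_corner, upper_corner)
            for label, lo, hi in boxes:
                if in_box(x, lo, hi):
                    return label
            return None  # unenclosed samples need a fallback rule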

  18. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  19. Scalable, distributed data mining using an agent based architecture

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-05-01

    Algorithm scalability and the distributed nature of both data and computation deserve serious attention in the context of data mining. This paper presents PADMA (PArallel Data Mining Agents), a parallel agent based system, that makes an effort to address these issues. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper describes the general architecture of PADMA and experimental results.

  20. The fractional volatility model: An agent-based interpretation

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and (or) properties of the financial institutions might be responsible for the features of the fractional volatility model.

  1. Biodegradability of ethylenediamine-based complexing agents.

    PubMed

    Sýkora, V; Pitter, P; Bittnerová, I; Lederer, T

    2001-06-01

    Biological degradability of ethylenediamine derivatives depends on the type and number of substituents. The susceptibility to biodegradation decreases in the sequence of substituents -COCH3, -CH3, -C2H5, -CH2CH2OH, -CH2COOH and with polysubstitution. The biodegradability also depends on the kind and number of nitrogen atoms. Complexing agents with a single nitrogen atom in the molecule (e.g. NTA) succumb relatively readily to biodegradation, whereas compounds with two or more tertiary amino groups are biologically highly stable and do not undergo biodegradation even in experiments with activated sludge adapted at an age of up to 30 days (EDTA, DTPA, PDTA, HEDTA). A lowering of the degree of substitution brings about an increased susceptibility to biodegradation. This holds, e.g., for replacement of tertiary amino groups with secondary ones; thus the symmetrically disubstituted ethylenediamine-N,N'-diacetic acid (EDDA) still possesses sufficient complexing ability while already belonging to the group of potentially degradable substances.

  2. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    NASA Astrophysics Data System (ADS)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have been carried out to further improve remote sensing over recent decades, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a more flexible model which describes homogeneous, heterogeneous and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with fewer parameters. To prove its feasibility, a classification process has been tested on a fully polarized Synthetic Aperture Radar (SAR) image using this method. First, multilook polarimetric SAR data processing and a speckle filter are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.

  3. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    PubMed Central

    Kloth, Michael; Buettner, Reinhard

    2014-01-01

    Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. The increase of information on molecular changes, however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as the manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaptation. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration and using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to a higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease-specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring. PMID:24879454

  4. The DTW-based representation space for seismic pattern classification

    NASA Astrophysics Data System (ADS)

    Orozco-Alzate, Mauricio; Castro-Cabrera, Paola Alexandra; Bicego, Manuele; Londoño-Bonilla, John Makario

    2015-12-01

    Distinguishing among the different seismic volcanic patterns is still one of the most important and labor-intensive tasks for volcano monitoring. This task could be lightened and made free from subjective bias by using automatic classification techniques. In this context, a core but often overlooked issue is the choice of an appropriate representation of the data to be classified. Recently, it has been suggested that using a relative representation (i.e. proximities, namely dissimilarities on pairs of objects) instead of an absolute one (i.e. features, namely measurements on single objects) is advantageous to exploit the relational information contained in the dissimilarities to derive highly discriminant vector spaces, where any classifier can be used. According to that motivation, this paper investigates the suitability of a dynamic time warping (DTW) dissimilarity-based vector representation for the classification of seismic patterns. Results show the usefulness of such a representation in the seismic pattern classification scenario, including analyses of potential benefits from recent advances in the dissimilarity-based paradigm such as the proper selection of representation sets and the combination of different dissimilarity representations that might be available for the same data.
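
    For concreteness, a minimal dynamic time warping (DTW) dissimilarity between two 1-D signals is sketched below; the dissimilarity-space representation of a signal s against a representation set R is then simply the vector of values dtw(s, r) for each r in R. This is the textbook recurrence, not the paper's specific implementation.

        import numpy as np

        def dtw(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    # extend the cheapest of the three admissible warping moves
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0: same shape, warped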

  5. A science based approach to topical drug classification system (TCS).

    PubMed

    Shah, Vinod P; Yacobi, Avraham; Rădulescu, Flavian Ştefan; Miron, Dalia Simona; Lane, Majella E

    2015-08-01

    The Biopharmaceutics Classification System (BCS) for oral immediate release solid drug products has been very successful; its implementation in the drug industry and in regulatory approval has shown significant progress. This has been the case primarily because the BCS was developed using sound scientific judgment. Following the success of the BCS, we have considered topical drug products for a similar classification system based on sound scientific principles. In the USA, most generic topical drug products have qualitatively (Q1) and quantitatively (Q2) the same excipients as the reference listed drug (RLD). The applications of in vitro release (IVR) and in vitro characterization are considered for a range of dosage forms (suspensions, creams, ointments and gels) of differing strengths. We advance a Topical Drug Classification System (TCS) based on a consideration of Q1 and Q2 as well as the arrangement of matter and microstructure of topical formulations (Q3). Four distinct classes are presented for the various scenarios that may arise, depending on whether a biowaiver can be granted or not.

  6. Statistical Analysis of Q-matrix Based Diagnostic Classification Models

    PubMed Central

    Chen, Yunxiao; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang

    2014-01-01

    Diagnostic classification models have recently gained prominence in educational assessment, psychiatric evaluation, and many other disciplines. Central to the model specification is the so-called Q-matrix that provides a qualitative specification of the item-attribute relationship. In this paper, we develop theories on the identifiability for the Q-matrix under the DINA and the DINO models. We further propose an estimation procedure for the Q-matrix through the regularized maximum likelihood. The applicability of this procedure is not limited to the DINA or the DINO model and it can be applied to essentially all Q-matrix based diagnostic classification models. Simulation studies are conducted to illustrate its performance. Furthermore, two case studies are presented. The first case is a data set on fraction subtraction (educational application) and the second case is a subsample of the National Epidemiological Survey on Alcohol and Related Conditions concerning the social anxiety disorder (psychiatric application). PMID:26294801
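
    To illustrate the role the Q-matrix plays under the DINA model, the sketch below computes the ideal response (before slip and guess noise): an examinee answers item j correctly only if they master every attribute that row j of Q requires. The Q-matrix, skill vector and slip/guess values are invented for the example.

        import numpy as np

        Q = np.array([[1, 0, 1],     # item 1 requires attributes 1 and 3
                      [0, 1, 0]])    # item 2 requires attribute 2
        alpha = np.array([1, 0, 1])  # examinee masters attributes 1 and 3

        # ideal response eta_j = prod_k alpha_k ** q_jk
        eta = np.prod(np.power(alpha, Q), axis=1)
        print(eta)                   # [1 0]

        # observed correct-response probability with slip s_j and guess g_j
        s, g = np.array([0.1, 0.1]), np.array([0.2, 0.2])
        p_correct = (1 - s) ** eta * g ** (1 - eta)
        print(p_correct)             # [0.9 0.2]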

  7. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.

  8. Genome-based microorganism classification using coalition formulation game.

    PubMed

    Chung, Byung Chang; Han, Gyu-Bum; Cho, Dong-Ho

    2015-01-01

    Genome-based microorganism classification is one of the interesting issues in microorganism taxonomy. However, advances in sequencing technology require a low-complexity algorithm to process the great amount of biological sequence data. In this paper, we suggest a coalition formation game for microorganism classification, which can be implemented in a distributed manner. We extract word frequency features from microorganism sequences and formulate a coalition game model that considers the distance among word frequency features. We then propose a coalition formation algorithm for clustering microorganisms by feature similarity. The performance of the proposed algorithm is compared with that of conventional schemes by means of an experiment. According to the results, the accuracy of the proposed distributed algorithm is similar to that of conventional centralized schemes.
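
    An illustrative sketch of the word (k-mer) frequency features that such clustering operates on; the choice of k and the Euclidean distance are assumptions for the example, not the paper's exact settings.

        from collections import Counter
        from itertools import product

        def kmer_freqs(seq, k=3):
            # relative frequency of every length-k word over the DNA alphabet
            counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
            total = sum(counts.values())
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            return [counts.get(km, 0) / total for km in kmers]

        def distance(f1, f2):
            return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

        seq1, seq2 = "ACGTACGTGGT", "ACGTTTGCAAC"
        print(distance(kmer_freqs(seq1), kmer_freqs(seq2)))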

  9. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. For this task, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
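
    A hedged sketch of wavelet-based texture features using PyWavelets (assumed installed): decompose the signal-as-image with a 2-D discrete wavelet transform and use subband energies as the feature vector. The wavelet, level and random stand-in image are illustrative choices, not necessarily those of the paper.

        import numpy as np
        import pywt

        def wavelet_texture_features(img, wavelet="db2", level=2):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            feats = [np.mean(coeffs[0] ** 2)]           # approximation energy
            for details in coeffs[1:]:                  # (H, V, D) per level
                feats.extend(np.mean(band ** 2) for band in details)
            return np.array(feats)

        img = np.random.rand(64, 64)  # stand-in for a quantized gray-level image
        print(wavelet_texture_features(img))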

  10. Semantic analysis based forms information retrieval and classification

    NASA Astrophysics Data System (ADS)

    Saba, Tanzila; Alqahtani, Fatimah Ayidh

    2013-09-01

    Data entry forms are employed in all types of enterprises to collect hundreds of customers' details on a daily basis. The information is filled in manually by the customers, so having human operators transfer it into computers is laborious and time consuming. Additionally, it is expensive, and human errors might cause serious flaws. The automatic interpretation of scanned forms has facilitated many real applications in terms of speed and accuracy, such as keyword spotting, sorting of postal addresses, script matching and writer identification. This research deals with different strategies to extract customer information from these scanned forms, its interpretation and classification. The extracted information is segmented into characters for classification and finally stored as records in databases for further processing. This paper presents a detailed discussion of these semantics-based analysis strategies for forms processing. Finally, new directions are also recommended for future research.

  11. A novel classification method based on membership function

    NASA Astrophysics Data System (ADS)

    Peng, Yaxin; Shen, Chaomin; Wang, Lijia; Zhang, Guixu

    2011-03-01

    We propose a method for medical image classification using membership functions. Our aim is to classify the image into several classes based on prior knowledge. For every point, we calculate its membership function, i.e., the probability that the point belongs to each class. The point is finally labeled as the class with the highest value of membership function. The classification is reduced to a minimization problem of a functional whose arguments are membership functions. There are three novelties in our paper. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance the image quality. Second, an unconstrained functional is used: we use variable substitution to avoid the constraints that membership functions should be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on ventricle images show the validity of this approach.
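
    A minimal sketch of one standard form of the variable substitution mentioned above (the paper's exact substitution may differ): unconstrained variables u_k parameterize the memberships so that positivity and the sum-to-one constraint hold automatically.

        import numpy as np

        def memberships(u):
            # softmax-style substitution: m_k = exp(u_k) / sum_j exp(u_j)
            e = np.exp(u - np.max(u))   # shift for numerical stability
            return e / e.sum()

        u = np.array([0.3, -1.2, 2.0])  # free variables, one per class
        m = memberships(u)
        print(m, m.sum())  # nonnegative values summing to 1; label = argmax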

  12. SNMFCA: supervised NMF-based image classification and annotation.

    PubMed

    Jing, Liping; Zhang, Chao; Ng, Michael K

    2012-11-01

    In this paper, we propose a novel supervised nonnegative matrix factorization-based framework for both image classification and annotation. The framework consists of two phases: training and prediction. In the training phase, two supervised nonnegative matrix factorizations for image descriptors and annotation terms are combined to identify the latent image bases, and to represent the training images in the bases space. These latent bases can capture the representation of the images in terms of both descriptors and annotation terms. Based on the new representation of training images, classifiers can be learnt and built. In the prediction phase, a test image is first represented by the latent bases via solving a linear least squares problem, and then its class label and annotation can be predicted via the trained classifiers and the proposed annotation mapping model. In the algorithm, we develop a three-block proximal alternating nonnegative least squares algorithm to determine the latent image bases, and show its convergent property. Extensive experiments on real-world image data sets suggest that the proposed framework is able to predict the label and annotation for testing images successfully. Experimental results have also shown that our algorithm is computationally efficient and effective for image classification and annotation.
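
    As a rough approximation of the two-phase pipeline (note the substitution: scikit-learn's NMF is plain unsupervised factorization, standing in for the paper's supervised variants), the sketch below learns bases on training descriptors, trains a classifier in the basis space, and projects test images with NMF.transform, which solves a nonnegative least-squares-style problem. Data shapes and parameters are placeholders.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.linear_model import LogisticRegression

        X_train = np.abs(np.random.rand(100, 50))  # placeholder image descriptors
        y_train = np.random.randint(0, 3, 100)     # placeholder class labels
        X_test = np.abs(np.random.rand(10, 50))

        nmf = NMF(n_components=10, max_iter=500, random_state=0)
        W_train = nmf.fit_transform(X_train)       # training images in basis space
        clf = LogisticRegression(max_iter=1000).fit(W_train, y_train)

        W_test = nmf.transform(X_test)             # project test images onto bases
        print(clf.predict(W_test))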

  13. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data while not placing a significantly added load on the monitoring systems and networks. This requires good data mining strategies which take less time and give accurate results. In this study, a novel data mining assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  14. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.

  15. In vitro antimicrobial activity of peroxide-based bleaching agents.

    PubMed

    Napimoga, Marcelo Henrique; de Oliveira, Rogério; Reis, André Figueiredo; Gonçalves, Reginaldo Bruno; Giannini, Marcelo

    2007-06-01

    The antibacterial activity of 4 commercial bleaching agents (Day White, Colgate Platinum, Whiteness 10% and 16%) on 6 oral pathogens (Streptococcus mutans, Streptococcus sobrinus, Streptococcus sanguinis, Candida albicans, Lactobacillus casei, and Lactobacillus acidophilus) and Staphylococcus aureus was evaluated. A chlorhexidine solution was used as a positive control, while distilled water was the negative control. Bleaching agents and control materials were inserted in sterilized stainless-steel cylinders that were positioned under inoculated agar plates (n = 4). After incubation for the period appropriate to each microorganism, the inhibition zones were measured. Data were analyzed by 2-way analysis of variance and the Tukey test (α = 0.05). All bleaching agents and the chlorhexidine solution produced antibacterial inhibition zones. Antimicrobial activity depended on the peroxide-based bleaching agent. For most microorganisms evaluated, bleaching agents produced inhibition zones similar to or larger than those observed for chlorhexidine. C albicans, L casei, and L acidophilus were the most resistant microorganisms.

  16. Agent based modeling of the coevolution of hostility and pacifism

    NASA Astrophysics Data System (ADS)

    Dalmagro, Fermin; Jimenez, Juan

    2015-01-01

    We propose a model based on a population of agents whose states represent either hostile or peaceful behavior. Randomly selected pairs of agents interact according to a variation of the Prisoner's Dilemma game, and the probabilities that the agents behave aggressively or not are constantly updated by the model so that the agents that remain in the game are those with the highest fitness. We show that the population of agents oscillates between generalized conflict and global peace, without reaching either as a stable state. We then use this model to explain some of the emergent behaviors in collective conflicts by comparing the simulated results with empirical data obtained from social systems. In particular, using public data reports, we show how the model precisely reproduces interesting quantitative characteristics of diverse types of armed conflicts, public protests, riots and strikes.
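
    A toy sketch in the spirit of the dynamic described above: randomly paired agents play a Prisoner's Dilemma and each agent's hostility probability is nudged toward whatever paid off. The payoffs, learning rule and population size are illustrative assumptions, and the fitness-based replacement of agents is omitted for brevity.

        import random

        T, R, P, S = 5.0, 3.0, 1.0, 0.0   # temptation, reward, punishment, sucker
        N, STEPS, LR = 200, 50000, 0.01   # population, interactions, step size

        p_hostile = [random.random() for _ in range(N)]

        def payoff(me_hostile, other_hostile):
            if me_hostile:
                return P if other_hostile else T
            return S if other_hostile else R

        for _ in range(STEPS):
            i, j = random.sample(range(N), 2)
            ai = random.random() < p_hostile[i]
            aj = random.random() < p_hostile[j]
            for k, act, gain in ((i, ai, payoff(ai, aj)), (j, aj, payoff(aj, ai))):
                # reinforce the action taken when it paid above mid-payoff
                delta = LR * (gain / T - 0.5)
                p_hostile[k] += delta if act else -delta
                p_hostile[k] = min(1.0, max(0.0, p_hostile[k]))

        print(sum(p_hostile) / N)  # mean hostility; fluctuates over long runs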

  17. Laser-based instrumentation for the detection of chemical agents

    SciTech Connect

    Hartford, A. Jr.; Sander, R.K.; Quigley, G.P.; Radziemski, L.J.; Cremers, D.A.

    1982-01-01

    Several laser-based techniques are being evaluated for the remote, point, and surface detection of chemical agents. Among the methods under investigation are optoacoustic spectroscopy, laser-induced breakdown spectroscopy (LIBS), and synchronous detection of laser-induced fluorescence (SDLIF). Optoacoustic detection has already been shown to be capable of extremely sensitive point detection. Its application to remote sensing of chemical agents is currently being evaluated. Atomic emission from the region of a laser-generated plasma has been used to identify the characteristic elements contained in nerve (P and F) and blister (S and Cl) agents. Employing this LIBS approach, detection of chemical agent simulants dispersed in air and adsorbed on a variety of surfaces has been achieved. Synchronous detection of laser-induced fluorescence provides an attractive alternative to conventional LIF, in that an artificial narrowing of the fluorescence emission is obtained. The application of this technique to chemical agent simulants has been successfully demonstrated. 19 figures.

  18. A global analysis on water-based fire extinguishing agent

    NASA Astrophysics Data System (ADS)

    WANG, Shuai

    2017-04-01

    Owing to the favorable properties of water, water-based fire extinguishing agents are considered among the most effective extinguishing agents. The NFPA has developed two standards regarding water-based fire extinguishing agents. An ISO technical committee working group is also preparing to develop a standard on these agents. China likewise has its own national GB standard for water-based agents. This paper elaborates the requirements and test methods in the technical documents and standards currently available around the world, summarizes the main concerns of the different standards, and seeks to provide useful information for future research and development.

  19. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a research hotspot. Making the agent negotiation process fast and efficient is the main research direction in this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derivatives usually give little consideration to the fairness of the two negotiators. This work presents a general model of agent negotiation which considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the degree of satisfaction, learn online about the opponent from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. By building this optimal negotiation model, bilateral negotiation achieves higher efficiency and fairer deals.

  20. Soil classification basing on the spectral characteristics of topsoil samples

    NASA Astrophysics Data System (ADS)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil utilization and management, but China has only a coarse soil map created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study tries to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their laboratory spectral reflectance in the visible and near-infrared region (400-2500 nm) was processed with weighted moving average, resampling, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption position of the spectral curve, the area of the first absorption valley, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. K-means clustering and a decision tree were then used respectively to build soil classification models. The results indicated that 1) the second absorption positions of Black soil and Chernozem are located at 610 nm and 650 nm respectively; 2) the spectral curve of the Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy, with accuracies for Black soil, Chernozem, Blown soil and Meadow soil of 100%, 88%, 97% and 50% respectively, and the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the area of the first two valleys) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.

  1. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    PubMed

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself as widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. However, the employability and sufficiency of the ACR criteria have recently been under debate. In this context, several evaluation methods, including clinical evaluation methods, were proposed by researchers. Accordingly, ACR had to update the criteria it announced back in 1990, in 2010 and again in 2011. The rule-based fuzzy logic method proposed here aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and physical examinations were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. It was observed that the fuzzy predictor was generally 95.56% consistent with at least one of the specialists who was not a creator of the fuzzy rule base. Thus, in a diagnostic classification in which the severity of FMS was classified as well, consistent findings were obtained from the comparison of the interpretations and experiences of specialists with the fuzzy logic approach. The study proposes a rule base which could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to disease classification; at the same time, the probability of occurrence and the severity were classified. In addition, those who were not suffering from FMS were
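
    A minimal sketch in the spirit of such a rule base (the triangular membership breakpoints, the single rule, and helper names like trimf are invented for illustration and are not the study's actual parameters):

        def trimf(x, a, b, c):
            # triangular membership function with support [a, c], peak at b
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fms_severity(tender_points, pain_months):
            high_tp   = trimf(tender_points, 9, 14, 18)
            long_pain = trimf(pain_months, 3, 12, 24)
            # RULE: IF tender-point count is high AND pain duration is long
            #       THEN FMS severity is high (min acts as the AND operator)
            return min(high_tp, long_pain)

        print(fms_severity(13, 10))  # degree to which the 'severe FMS' rule fires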

  2. Performance verification of a LIF-LIDAR technique for stand-off detection and classification of biological agents

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Zygmunt, Marek; Muzal, Michał; Knysak, Piotr; Młodzianko, Andrzej; Gawlikowski, Andrzej; Drozd, Tadeusz; Kopczyński, Krzysztof; Mierczyk, Zygmunt; Kaszczuk, Mirosława; Traczyk, Maciej; Gietka, Andrzej; Piotrowski, Wiesław; Jakubaszek, Marcin; Ostrowski, Roman

    2015-04-01

    LIF (laser-induced fluorescence) LIDAR (light detection and ranging) is one of the very few promising methods for long-range stand-off detection of airborne biological particles. A limited classification of the detected material also appears to be a feasible asset. We present the design details and hardware setup of the developed range-resolved multichannel LIF-LIDAR system. The device is based on two pulsed UV laser sources operating at 355 nm and 266 nm wavelengths (the 3rd and 4th harmonics of a Q-switched, solid-state Nd:YAG laser, respectively). Range-resolved fluorescence signals are collected in 28 channels of a compound PMT sensor coupled with a Czerny-Turner spectrograph. The calculated theoretical sensitivities are confronted with the results obtained during a measurement field campaign. Classification efforts based on linear processing of the 28-digit fluorescence spectral signatures are also presented.

  3. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  4. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  5. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontologies, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and the architecture of the ASISG is advanced. The ASISG is composed of multi-agents and geographic ontologies. The multi-agent systems are composed of User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of the ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of the Data Access Agent, the Resource Agent and the Geo-Agent, encapsulates the data of spatial information systems and exhibits a conceptual interface for the grid management layer. The grid management layer, which is composed of the General Ontology Agent, the Task Execution Agent, the Monitor Agent and the Data Analysis Agent, uses a hybrid method to manage all resources that are registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: resource dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and resource discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens the burden on users because they need not search for information site by site manually. The application layer, which is composed of the User Agent, the Geo-Agent and the Task Execution Agent, provides a corresponding interface to domain users. The functions that the ASISG should provide are: 1) it integrates different spatial information systems on the semantic Grid

  6. Classification of Watersheds for Bioassessment Based on Hydrological Variables

    NASA Astrophysics Data System (ADS)

    Chinnayakanahalli, K. J.; Tarboton, D. G.; Hawkins, C. P.

    2007-12-01

    A procedure for the classification of watersheds for bioassessment based on their streamflow regime, and for the prediction of hydrologic class from watershed attributes, is presented. We first identified a set of streamflow regime variables relevant to biota for the purposes of characterizing the invertebrate population in a stream, which can be abstracted from long-term streamflow data measured at gauged sites. The selection of these variables was based on the past literature and discussions with stream ecologists. The following variables were selected: 1) base flow index (BFI), 2) daily coefficient of variation (DAYCV), 3) average daily flow (QMEAN), 4) number of zero-flow days (ZERODAY), 5) bankfull flow (Q1.67), 6) Colwell's index, 7) seven-day minimum (7Qmin), 8) seven-day maximum (7Qmax), 9) number of flow reversals (NOR), and 10) flood frequency. These variables were computed at 543 minimally impacted stream gage stations in thirteen states of the western US. Principal Component Analysis (PCA) and K-means clustering analysis were then used to classify the watersheds into hydrologically different groups. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART) and Random Forests (RF) models were then developed to predict the class of an ungauged watershed from watershed attributes (climate, geomorphic, geology and soil attributes). We developed a series of classifications (with K equal to 4 to 8 in K-means clustering) that showed a strong geographical structure. The classification is sensitive to the quantity of water present in the stream, and it also identified streams that appear similar at the monthly time scale but are significantly different at the daily time scale. These differences are important for identifying variation in the biota. For the prediction of watershed class from watershed attributes we found that the RF model was slightly better than the other modeling approaches evaluated (LDA, CART). The class characterized by high BFI was difficult
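
    A compact illustration of the pipeline described in this record (dimension reduction, clustering of gauged sites, then class prediction for ungauged sites) can be sketched with scikit-learn; the random arrays merely stand in for the ten flow-regime variables and the watershed attributes, and K=6 is one value from the paper's K=4-8 range.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
flow_metrics = rng.normal(size=(543, 10))    # stand-ins for BFI, DAYCV, ...
watershed_attrs = rng.normal(size=(543, 6))  # climate/geology/soil stand-ins

# 1. Compress the correlated flow-regime variables with PCA.
scores = PCA(n_components=5).fit_transform(flow_metrics)

# 2. Cluster the gauged watersheds into K hydrologic classes (paper: K=4..8).
classes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)

# 3. Predict the class of an ungauged watershed from watershed attributes.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(rf, watershed_attrs, classes, cv=5).mean())
```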

  7. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  8. The Development of Sugar-Based Anti-Melanogenic Agents.

    PubMed

    Bin, Bum-Ho; Kim, Sung Tae; Bhin, Jinhyuk; Lee, Tae Ryong; Cho, Eun-Gyung

    2016-04-16

    The regulation of melanin production is important for managing skin darkness and hyperpigmentary disorders. Numerous anti-melanogenic agents that target tyrosinase activity/stability, melanosome maturation/transfer, or melanogenesis-related signaling pathways have been developed. As a rate-limiting enzyme in melanogenesis, tyrosinase has been the most attractive target, but tyrosinase-targeted treatments still pose serious potential risks, indicating the necessity of developing lower-risk anti-melanogenic agents. Sugars are ubiquitous natural compounds found in humans and other organisms. Here, we review the recent advances in research on the roles of sugars and sugar-related agents in melanogenesis and in the development of sugar-based anti-melanogenic agents. The proposed mechanisms of action of these agents include: (a) (natural sugars) disturbing proper melanosome maturation by inducing osmotic stress and inhibiting the PI3 kinase pathway and (b) (sugar derivatives) inhibiting tyrosinase maturation by blocking N-glycosylation. Finally, we propose an alternative strategy for developing anti-melanogenic sugars that theoretically reduce melanosomal pH by inhibiting a sucrose transporter and reduce tyrosinase activity by inhibiting copper incorporation into an active site. These studies provide evidence of the utility of sugar-based anti-melanogenic agents in managing skin darkness and curing pigmentary disorders and suggest a future direction for the development of physiologically favorable anti-melanogenic agents.

  9. The Development of Sugar-Based Anti-Melanogenic Agents

    PubMed Central

    Bin, Bum-Ho; Kim, Sung Tae; Bhin, Jinhyuk; Lee, Tae Ryong; Cho, Eun-Gyung

    2016-01-01

    The regulation of melanin production is important for managing skin darkness and hyperpigmentary disorders. Numerous anti-melanogenic agents that target tyrosinase activity/stability, melanosome maturation/transfer, or melanogenesis-related signaling pathways have been developed. As a rate-limiting enzyme in melanogenesis, tyrosinase has been the most attractive target, but tyrosinase-targeted treatments still pose serious potential risks, indicating the necessity of developing lower-risk anti-melanogenic agents. Sugars are ubiquitous natural compounds found in humans and other organisms. Here, we review the recent advances in research on the roles of sugars and sugar-related agents in melanogenesis and in the development of sugar-based anti-melanogenic agents. The proposed mechanisms of action of these agents include: (a) (natural sugars) disturbing proper melanosome maturation by inducing osmotic stress and inhibiting the PI3 kinase pathway and (b) (sugar derivatives) inhibiting tyrosinase maturation by blocking N-glycosylation. Finally, we propose an alternative strategy for developing anti-melanogenic sugars that theoretically reduce melanosomal pH by inhibiting a sucrose transporter and reduce tyrosinase activity by inhibiting copper incorporation into an active site. These studies provide evidence of the utility of sugar-based anti-melanogenic agents in managing skin darkness and curing pigmentary disorders and suggest a future direction for the development of physiologically favorable anti-melanogenic agents. PMID:27092497

  10. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp, and emphasize the need to streamline, processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D. Sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher-risk-posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.

  11. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers and is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were applied to discriminate apples according to geographical origin. Hyperspectral images of 207 Fuji apple samples were collected with a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main efficient wavelength images, and characteristic variables were then extracted by texture analysis based on the gray-level co-occurrence matrix (GLCM) from the dominant waveband image. All characteristic variables were obtained by fusing the data of the images in the efficient spectra. A support vector machine (SVM) was used to construct the classification model and showed excellent performance. The total classification accuracy was 92.75% on the training set and 89.86% on the prediction set. The overall results demonstrate that the hyperspectral imaging technique coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples according to geographical origin.
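
    The record's pipeline (band selection, GLCM texture features, SVM) can be sketched as follows, assuming scikit-image (0.19 or later for the graycomatrix/graycoprops names) and scikit-learn; the grey-level quantisation, GLCM offsets and SVM parameters here are illustrative choices, not the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def glcm_features(band_image):
    """Texture features from one efficient-wavelength band image,
    quantised to 64 grey levels (choices here are illustrative)."""
    img = (band_image / band_image.max() * 63).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=64, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical data: one dominant-band image per apple, plus origin labels.
rng = np.random.default_rng(1)
X = np.array([glcm_features(rng.random((32, 32))) for _ in range(60)])
y = rng.integers(0, 3, size=60)                 # three origins (stand-in)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print(clf.fit(X[:40], y[:40]).score(X[40:], y[40:]))
```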

  12. Cloud detection and classification based on MAX-DOAS observations

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Beirle, S.; Dörner, S.; Friess, U.; Remmers, J.; Shaiganfar, R.

    2013-12-01

    Multi-AXis-Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of aerosols and trace gases can be strongly influenced by clouds. Thus it is important to identify clouds and characterise their properties. In this study we investigate the effects of clouds on several quantities which can be derived from MAX-DOAS observations, such as the radiance, the colour index (radiance ratio at two selected wavelengths), the absorption of the oxygen dimer O4 and the fraction of inelastically scattered light (Ring effect). To identify clouds, these quantities can either be compared to their corresponding clear-sky reference values, or their dependencies on time or viewing direction can be analysed. From the investigation of the temporal variability the influence of clouds can be identified even for individual measurements. Based on our investigations we developed a cloud classification scheme, which can be applied in a flexible way to MAX-DOAS or zenith DOAS observations: in its simplest version, zenith observations of the colour index are used to identify the presence of clouds (or a high aerosol load). In more sophisticated versions, other quantities and viewing directions are also considered, which allows sub-classifications such as thin or thick clouds, or fog. We applied our cloud classification scheme to MAX-DOAS observations during the CINDI campaign in the Netherlands in summer 2009 and found very good agreement with sky images taken from the ground.
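
    In its simplest version, the scheme flags clouds from zenith colour-index observations and their temporal variability. A toy rendering of that logic is sketched below; the two wavelengths, the colour-index definition and all thresholds are placeholders, not the values derived in the study.

```python
import numpy as np

def classify_sky(rad_blue, rad_red, ci_clear, window=5):
    """Toy zenith colour-index test: compare the measured colour index and
    its short-term variability against a clear-sky reference value."""
    ci = np.asarray(rad_blue) / np.asarray(rad_red)  # colour index
    labels = []
    for i in range(len(ci)):
        variability = ci[max(0, i - window):i + 1].std()
        if ci[i] > 0.9 * ci_clear:
            labels.append("clear")
        elif variability > 0.05 * ci_clear:
            labels.append("broken clouds")      # strong temporal variation
        else:
            labels.append("continuous clouds")  # low CI, smooth in time
    return labels

r_blue = np.array([5.0, 5.1, 3.0, 4.5, 2.9, 3.0])
r_red = np.array([4.0, 4.0, 4.0, 4.0, 4.0, 4.0])
print(classify_sky(r_blue, r_red, ci_clear=1.25))
```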

  13. Fruit classification based on weighted score-level feature fusion

    NASA Astrophysics Data System (ADS)

    Kuang, Hulin; Hang Chan, Leanne Lai; Liu, Cairong; Yan, Hong

    2016-01-01

    We describe an object classification method based on weighted score-level feature fusion using learned weights. Our method is able to recognize 20 object classes in a customized fruit dataset. Although the fusion of multiple features is commonly used to distinguish variable object classes, the optimal combination of features is not well defined. Moreover, in these methods, most parameters used for feature extraction are not optimized and the contribution of each feature to an individual class is not considered when determining the weight of the feature. Our algorithm relies on optimizing a single feature during feature selection and learning the weight of each feature for an individual class from the training data using a linear support vector machine before the features are linearly combined with the weights at the score level. The optimal single feature is selected using cross-validation. The optimal combination of features is explored and tested experimentally using a customized fruit dataset with 20 object classes and a variety of complex backgrounds. The experiment results show that the proposed feature fusion method outperforms four state-of-the-art fruit classification algorithms and improves the classification accuracy when compared with some state-of-the-art feature fusion methods.
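
    The core idea, per-class fusion weights learned from base-classifier scores by a linear SVM, can be sketched with scikit-learn as below; the three random feature sets stand in for, e.g., colour, shape and texture descriptors, and for brevity the sketch fits and scores on the same data, which a real evaluation would not.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, n_classes = 300, 20
y = rng.integers(0, n_classes, size=n)
feature_sets = [rng.normal(size=(n, d)) + y[:, None] * 0.05
                for d in (64, 32, 128)]   # colour/shape/texture stand-ins

# 1. One base classifier per feature; its per-class scores feed the fusion.
bases = [LogisticRegression(max_iter=1000).fit(F, y) for F in feature_sets]
scores = [clf.decision_function(F) for clf, F in zip(bases, feature_sets)]

# 2. A linear SVM on the stacked score vectors learns, for every class,
#    how much each feature's score contributes (per-class fusion weights).
fusion = LinearSVC(max_iter=5000).fit(np.hstack(scores), y)

# 3. Prediction uses the weighted combination of all score vectors.
print((fusion.predict(np.hstack(scores)) == y).mean())
```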

  14. Doppler Feature Based Classification of Wind Profiler Data

    NASA Astrophysics Data System (ADS)

    Sinha, Swati; Chandrasekhar Sarma, T. V.; Lourde. R, Mary

    2017-01-01

    Wind profilers (WP) are coherent pulsed Doppler radars in the UHF and VHF bands. They are used for vertical profiling of wind velocity and direction. This information is very useful for weather modeling, the study of climatic patterns and weather prediction. Observations at different heights and different wind velocities are possible by changing the operating parameters of the WP. A set of Doppler power spectra is the standard form of WP data; wind velocity, direction and wind-velocity turbulence at different heights can be derived from it. Modern wind profilers operate for long durations and generate approximately 4 megabytes of data per hour. The radar data stream contains Doppler power spectra from different radar configurations with echoes from different atmospheric targets. In order to facilitate systematic study, these data need to be segregated according to the type of target, which requires a reliable automated target classification technique. Classical techniques of radar target identification use pattern matching and minimization of mean squared error, Euclidean distance, etc. These techniques are not effective for the classification of WP echoes, as these targets do not have well-defined signatures in the Doppler power spectra. This paper presents an effective target classification technique based on range-Doppler features.

  15. Case-based statistical learning applied to SPECT image classification

    NASA Astrophysics Data System (ADS)

    Górriz, Juan M.; Ramírez, Javier; Illán, I. A.; Martínez-Murcia, Francisco J.; Segovia, Fermín.; Salas-Gonzalez, Diego; Ortiz, A.

    2017-03-01

    Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications, and biomedical applications for diagnosis and prognosis. This paper deals with learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model, or learner, and a testing set to which the learner is applied in order to predict the outcome for new, unseen patterns. Both processes are usually completely separated to avoid over-fitting and because, in practice, the unseen new objects (testing set) have unknown outcomes. However, the outcome takes one of a discrete set of values, as in the binary diagnosis problem. Thus, assumptions on these outcome values could be established to obtain the most likely prediction model at the training stage, which could improve the overall classification accuracy on the testing set, or at least keep its performance at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed which combines hypothesis testing from a discrete set of expected outcomes with a cross-validated classification stage.

  16. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectrums are first calculated analytically from the kernel function itself using the autocorrelation theorem as well as being estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
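
    A minimal numerical rendering of the spectrum-matching idea for a 1-D RBF kernel is sketched below; the matching rule (equating the two normalised spectra at the data spectrum's half-power frequency) is one plausible reading, not necessarily the paper's exact procedure.

```python
import numpy as np

def estimate_length_scale(y, dt=1.0):
    """Match the analytic spectrum of the RBF kernel k(t)=exp(-t^2/(2 l^2))
    to the periodogram of evenly spaced 1-D data, equating them at the
    frequency where the data spectrum drops to half its peak."""
    y = np.asarray(y, dtype=float)
    power = np.abs(np.fft.rfft(y - y.mean())) ** 2
    power = np.convolve(power, np.ones(9) / 9, mode="same")  # smooth it
    power /= power.max()
    omega = 2 * np.pi * np.fft.rfftfreq(len(y), d=dt)
    w = omega[np.argmin(np.abs(power[1:] - 0.5)) + 1]  # half-power frequency
    # The kernel spectrum is proportional to exp(-l^2 w^2 / 2); solving
    # exp(-l^2 w^2 / 2) = 1/2 for l gives:
    return np.sqrt(2 * np.log(2)) / w

rng = np.random.default_rng(0)
smooth = np.convolve(rng.normal(size=4000), np.ones(25) / 25, mode="same")
print(estimate_length_scale(smooth))
```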

  17. Agent-based services for B2B electronic commerce

    NASA Astrophysics Data System (ADS)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST- sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  18. Agent-Based Distributed Data Mining: A Survey

    NASA Astrophysics Data System (ADS)

    Moemeng, Chayapol; Gorodetsky, Vladimir; Zuo, Ziye; Yang, Yong; Zhang, Chengqi

    Distributed data mining originated from the need to mine over decentralised data sources. Data mining techniques operating in such complex environments must cope with considerable dynamics, since changes in the system can affect its overall performance. Agent computing, whose aim is to deal with complex systems, has revealed opportunities to improve distributed data mining systems in a number of ways. This paper surveys the integration of multi-agent systems and distributed data mining, also known as agent-based distributed data mining, in terms of significance, system overview, existing systems, and research trends.

  19. Replication Based on Role Concept for Multi-Agent Systems

    NASA Astrophysics Data System (ADS)

    Bora, Sebnem; Dikenelli, Oguz

    Replication is widely used to improve fault tolerance in distributed and multi-agent systems. In this paper, we present a different point of view on replication in multi-agent systems. The approach we propose is based on the role concept. We define a specific "fault tolerant role" which encapsulates all behaviors related to replication-based fault tolerance in this work. Our strategy is mainly focused on replicating instances of critical roles in the agent organization. However, while doing this, we simply transfer the critical role and the fault tolerant role to appropriate agents. Here, the fault tolerant role is responsible for coordination between replicated role instances (replicas). Moreover, our approach is flexible in terms of fault tolerance since it is possible to easily modify the existing behaviors of the "fault tolerant" role, remove some of its behaviors, or add new behaviors to it, owing to its characteristic architecture.

  20. Inorganic nanoparticle-based contrast agents for molecular imaging

    PubMed Central

    Cho, Eun Chul; Glaus, Charles; Chen, Jingyi; Welch, Michael J.; Xia, Younan

    2010-01-01

    Inorganic nanoparticles including semiconductor quantum dots, iron oxide nanoparticles, and gold nanoparticles have been developed as contrast agents for diagnostics by molecular imaging. Compared to traditional contrast agents, nanoparticles offer several advantages: their optical and magnetic properties can be tailored by engineering the composition, structure, size, and shape; their surfaces can be modified with ligands to target specific biomarkers of disease; the contrast enhancement provided can be equivalent to millions of molecular counterparts; and they can be integrated with a combination of different functions for multi-modal imaging. Here, we review recent advances in the development of contrast agents based on inorganic nanoparticles for molecular imaging, touching on contrast enhancement, surface modification, tissue targeting, clearance, and toxicity. As research efforts intensify, contrast agents based on inorganic nanoparticles that are highly sensitive, target-specific, and safe to use are expected to enter clinical applications in the near future. PMID:21074494

  1. Tutorial on agent-based modeling and simulation.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2005-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques.
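
    For readers new to ABMS, the essential ingredients, autonomous agents following simple behavioural rules whose interactions produce an emergent macro-state, fit in a few lines; the trader model below is an invented toy, not an example from the tutorial.

```python
import random

class Trader:
    """Minimal autonomous agent: keeps a private price estimate and
    follows one simple behavioural rule per time step."""
    def __init__(self):
        self.estimate = random.gauss(100.0, 5.0)

    def step(self, market_price):
        # Drift toward the observed market price, plus idiosyncratic noise.
        self.estimate += 0.1 * (market_price - self.estimate) + random.gauss(0, 1)

random.seed(0)
agents = [Trader() for _ in range(500)]
price = 100.0
for t in range(50):
    for a in agents:
        a.step(price)
    # Emergent macro-state: the price is the mean of individual estimates.
    price = sum(a.estimate for a in agents) / len(agents)
print(round(price, 2))
```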

  2. Agent-Based Service Composition in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Gutierrez-Garcia, J. Octavio; Sim, Kwang-Mong

    In a Cloud-computing environment, consumers, brokers, and service providers interact to achieve their individual purposes. In this regard, service providers offer a pool of resources wrapped as web services, which should be composed by broker agents to provide a single virtualized service to Cloud consumers. In this study, an agent-based test bed for simulating Cloud-computing environments is developed. Each Cloud participant is represented by an agent, whose behavior is defined by means of colored Petri nets. The relationship between web services and service providers is modeled using object Petri nets. Both Petri net formalisms are combined to support a design methodology for defining concurrent and parallel service choreographies. This results in the creation of a dynamic agent-based service composition algorithm. The simulation results indicate that service composition is achieved with a linear time complexity despite dealing with interleaving choreographies and synchronization of heterogeneous services.

  3. A Multiagent Recommender System with Task-Based Agent Specialization

    NASA Astrophysics Data System (ADS)

    Lorenzi, Fabiana; Correa, Fabio Arreguy Camargo; Bazzan, Ana L. C.; Abel, Mara; Ricci, Francesco

    This paper describes a multiagent recommender system where agents maintain local knowledge bases and, when requested to support a travel planning task, collaborate by exchanging information stored in their local bases. A request for a travel recommendation is decomposed by the system into subtasks, corresponding to travel services. Agents select tasks autonomously, and accomplish them with the help of the knowledge derived from previous solutions. In the proposed architecture, agents become experts in some task types, and this makes the recommendation generation more efficient. In this paper, we validate the model via simulations where agents collaborate to recommend a travel package to the user. The experiments show that specialization is useful, hence providing a validation of the proposed model.

  4. Rough set theory based prognostic classification models for hospice referral.

    PubMed

    Gil-Herrera, Eleazar; Aden-Buie, Garrick; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin

    2015-11-25

    This paper explores and evaluates the application of classical and dominance-based rough set theory (RST) for the development of data-driven prognostic classification models for hospice referral. In this work, rough set based models are compared with other data-driven methods with respect to two factors related to clinical credibility: accuracy and accessibility. Accessibility refers to the ability of the model to provide traceable, interpretable results and to use data that are relevant and simple to collect. We utilize retrospective data from 9,103 terminally ill patients to demonstrate the design and implementation of RST-based models to identify potential hospice candidates. The classical rough set approach (CRSA) provides methods for knowledge acquisition, founded on the relational indiscernibility of objects in a decision table, to describe the required conditions for membership in a concept class. On the other hand, the dominance-based rough set approach (DRSA) analyzes information based on the monotonic relationships between condition attribute values and their assignment to the decision class. CRSA decision rules for six-month patient survival classification were induced using the MODLEM algorithm. Dominance-based decision rules were extracted using the VC-DomLEM rule induction algorithm. The RST-based classifiers are compared with other predictive and rule-based decision modeling techniques, namely logistic regression, support vector machines, random forests and C4.5. The RST-based classifiers demonstrate an average AUC of 69.74% with MODLEM and 71.73% with VC-DomLEM, while the compared methods achieve average AUCs of 74.21% for logistic regression, 73.52% for support vector machines, 74.59% for random forests, and 70.88% for C4.5. This paper contributes to the growing body of research in RST-based prognostic models. RST and its extensions possess features that enhance the accessibility of clinical decision support models. While the non-rule-based methods

  5. Classification of periodontal diseases: the dilemma continues.

    PubMed

    Devi, Prapulla; Pradeep, A R

    2009-01-01

    Classification is the systematic separation and organization of knowledge about diseases. Even though it is ideal that classification of periodontal diseases be based solely on etiologic agents, it is not always practical, since many factors influence the manifestations of periodontal disease. Until recently, the 1989 American Academy of Periodontology classification system was used. However, this classification system was soon criticized because of its drawbacks. In the 1999 world workshop, the classification was revised, and an elaborate new classification system was agreed upon. This paper reviews the current literature and compiles the views of various authors regarding the 1989 and 1999 world workshop classifications.

  6. Behavior-Based Language Generation for Believable Agents,

    DTIC Science & Technology

    1995-03-01

    To further bring out some of the requirements on believable language- and action-producing agents, let us examine four seconds from the film Casablanca [5]... could be extended to support language generation. Second, the extensions to Hap to support language generation might be useful in expressing... Behavior-based Language Generation for Believable Agents. A. Bryan Loyall, Joseph Bates. March 1995. CMU-CS-95-139, School of Computer Science.

  7. Tutorial on Agent-based Modeling and Simulation

    DTIC Science & Technology

    2007-06-01

    Tutorial on Agent-based Modeling and Simulation. Michael J. North and Charles M. Macal, Center for Complex Adaptive Agent Systems Simulation (CAS2), Decision and Information Sciences, Argonne National Laboratory, 9700 S. Cass Avenue. Works cited include Crichton, Michael, 2002, Prey, HarperCollins, and Epstein, J. M. and Axtell, R., 1996, Growing Artificial Societies.

  8. Evaluation of development prospects of renewable energy: agent based modelling

    NASA Astrophysics Data System (ADS)

    Klevakina, E. A.; Zabelina, I. A.; Murtazina, M. Sh

    2017-01-01

    The paper describes the use of an agent-based model to evaluate the dynamics and prospects of alternative energy adoption in the eastern regions of Russia. It includes a brief review of agent-based models that can be used to estimate alternatives in the process of transition to a "green" economy. The authors show that active use of solar energy in Russia is possible at the rural household level where climate conditions are appropriate. Adoption of solar energy sources decreases energy production from conventional sources and improves the quality of the environment in the regions. A complex regional multi-agent model is considered in this paper. The model consists of several sub-models and uses GIS technologies. These sub-models are a demographic and migration model of the region and a diffusion-of-innovations model. In these models, the agents are humans who live within the boundaries of municipal agents, as well as large-scale producers of electricity that pollute the environment. Such a structure allows us to determine changes in the demand for electricity generated from traditional sources. The simulation software will assist in identifying opportunities for the implementation of alternative energy sources in the eastern regions of Russia.

  9. Agent-based modeling and simulation Part 3 : desktop ABMS.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2007-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS 'is a third way of doing science,' in addition to traditional deductive and inductive reasoning (Axelrod 1997b). Computational advances have made possible a growing number of agent-based models across a variety of application domains. Applications range from modeling agent behavior in the stock market, supply chains, and consumer markets, to predicting the spread of epidemics, the threat of bio-warfare, and the factors responsible for the fall of ancient civilizations. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing agent models, and illustrates the development of a simple agent-based model of shopper behavior using spreadsheets.

  10. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
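
    The "simple Bayesian classification scheme" lends itself to a short sketch with Gaussian naive Bayes; the band values, class structure and scene below are synthetic stand-ins for MODIS radiances, illustrating the content-based subsetting step (keeping only clear-ocean pixels).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical training pixels: a few MODIS-like band values per pixel,
# labelled clear-ocean / cloud / sun-glint by an analyst.
X_train = np.vstack([rng.normal(0.05, 0.01, (200, 4)),   # clear ocean
                     rng.normal(0.60, 0.10, (200, 4)),   # cloud
                     rng.normal(0.25, 0.05, (200, 4))])  # sun-glint
y_train = np.repeat(["ocean", "cloud", "glint"], 200)

clf = GaussianNB().fit(X_train, y_train)

# Classify every pixel of a new scene, then keep only the clear ones
# (content-based subsetting).
scene = rng.normal(0.05, 0.02, size=(512, 512, 4)).clip(0)
labels = clf.predict(scene.reshape(-1, 4)).reshape(512, 512)
clear_fraction = (labels == "ocean").mean()
print(f"clear-ocean pixels: {clear_fraction:.0%}")
```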

  11. Safety aspects in biotechnology. Classifications and safety precautions for handling of biological agents.

    PubMed

    Frommer, W; Krämer, P

    1990-07-01

    The term "biotechnology" is today used much more widely than 10 years ago. According to the modern definition, biotechnology represents the "conveyor belt" which brings advances in the fields of molecular biology, cell biology, molecular genetics, microbiology, biochemistry and process engineering, etc., into the areas of application. It is attempted to indicate the development of safety standards concerning biotechnology. This development is in a state of flux, and the finding that the risks in handling r-DNA organisms are not larger than those arising when handling the known pathogens is becoming more accepted. Accordingly, these r-DNA organisms can also be classified into the known risk groups I-IV and handled under the corresponding safety conditions according to this classification: In the laboratory under the laboratory safety measures L1-L4 described in the BMFT-Guidelines or guidelines for occupational health and hygiene (UVV Biotechnologie) and on a process scale under the process safety measures described in the OECD report. The discussion of aspects on waste disposal, education/training and public perception in the field of biological safety completes the report.

  12. S1 gene-based phylogeny of infectious bronchitis virus: An attempt to harmonize virus classification.

    PubMed

    Valastro, Viviana; Holmes, Edward C; Britton, Paul; Fusaro, Alice; Jackwood, Mark W; Cattoli, Giovanni; Monne, Isabella

    2016-04-01

    Infectious bronchitis virus (IBV) is the causative agent of a highly contagious disease that results in severe economic losses to the global poultry industry. The virus exists in a wide variety of genetically distinct viral types, and both phylogenetic analysis and measures of pairwise similarity among nucleotide or amino acid sequences have been used to classify IBV strains. However, there is currently no consensus on the method by which IBV sequences should be compared, and heterogeneous genetic group designations that are inconsistent with phylogenetic history have been adopted, leading to the confusing coexistence of multiple genotyping schemes. Herein, we propose a simple and repeatable phylogeny-based classification system combined with an unambiguous and rational lineage nomenclature for the assignment of IBV strains. By using complete nucleotide sequences of the S1 gene we determined the phylogenetic structure of IBV, which in turn allowed us to define 6 genotypes that together comprise 32 distinct viral lineages and a number of inter-lineage recombinants. Because of extensive rate variation among IBVs, we suggest that the inference of phylogenetic relationships alone represents a more appropriate criterion for sequence classification than pairwise sequence comparisons. The adoption of an internationally accepted viral nomenclature is crucial for future studies of IBV epidemiology and evolution, and the classification scheme presented here can be updated and revised as novel S1 sequences become available. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Classification of lymphatic-system malformations in primary lymphoedema based on MR lymphangiography.

    PubMed

    Liu, N F; Yan, Z X; Wu, X F

    2012-09-01

    The study aims to investigate lymphatic-system malformations and proposes a classification of primary lymphoedema based on comprehensive imaging data of both lymph-vessel and lymph-node abnormalities. A total of 378 patients with primary lymphoedema of the lower extremity were examined with magnetic resonance lymphangiography (MRL) using gadobenate dimeglumine as the contrast agent. Lymph vessels and drainage lymph nodes were evaluated, leading to a proposed classification of primary lymphoedema and the relative proportions of each type. A total of 63 (17%) patients exhibited defects of the inguinal lymph nodes with mild or moderate dilatation of afferent lymph vessels. A total of 123 (32%) patients exhibited lymphatic anomalies such as lymphatic aplasia, hypoplasia or hyperplasia with no obvious defect of the drainage lymph nodes. Involvement of both lymph-vessel and lymph-node abnormalities in the affected limb was found in 192 (51%) patients. Primary lymphoedema was classified into three major types: (1) lymph node affected only; (2) lymph vessel affected only, with three subtypes; and (3) both lymph vessel and lymph node affected, with subgroups. A comprehensive classification of lymphatic-system malformation in primary lymphoedema is proposed, which clearly defines the location and pathologic characteristics of both lymph vessels and lymph nodes and may lead to further study of the aetiology as well as rational treatment of the disease. Copyright © 2012 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  15. Agents and Data Mining in Bioinformatics: Joining Data Gathering and Automatic Annotation with Classification and Distributed Clustering

    NASA Astrophysics Data System (ADS)

    Bazzan, Ana L. C.

    Multiagent systems and data mining techniques are being frequently used in genome projects, especially regarding the annotation process (annotation pipeline). This paper discusses annotation-related problems where agent-based and/or distributed data mining has been successfully employed.

  16. A Max-Margin Perspective on Sparse Representation-Based Classification

    DTIC Science & Technology

    2013-11-30

    Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal... a reconstructive perspective, which neither offers any guarantee on its classification performance nor pro...

  17. An immunity-based anomaly detection system with sensor agents.

    PubMed

    Okamoto, Takeshi; Ishida, Yoshiteru

    2009-01-01

    This paper proposes an immunity-based anomaly detection system with sensor agents based on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user. Multiple diverse agents decide whether the behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, while the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system, which is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in user's command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems.
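
    A toy rendering of the two ideas, specificity (a sensor learns one user's "self") and diversity (several differently-trained sensors vote), might look like the following; the profile representation, scoring rule and threshold are all invented for illustration.

```python
import random
from collections import Counter

random.seed(1)

class SensorAgent:
    """Sensor specialised to one user: it learns that user's command
    habits ('self') and scores how self-like a new session looks."""
    def __init__(self, history):
        self.profile = Counter(history)

    def score(self, session):
        hits = sum(1 for cmd in session if self.profile[cmd] > 0)
        return hits / max(len(session), 1)

def detect(session, sensors, threshold=0.6):
    # Diversity: several differently-trained sensors vote; majority wins.
    votes = [s.score(session) >= threshold for s in sensors]
    return "normal" if sum(votes) > len(votes) / 2 else "anomalous"

history = ["ls", "cd", "vim", "git", "ls", "make", "cd", "git"]
sensors = [SensorAgent(random.sample(history, 5)) for _ in range(3)]
print(detect(["ls", "cd", "git"], sensors))      # likely 'normal'
print(detect(["nc", "wget", "chmod"], sensors))  # 'anomalous'
```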

  18. An Immunity-Based Anomaly Detection System with Sensor Agents

    PubMed Central

    Okamoto, Takeshi; Ishida, Yoshiteru

    2009-01-01

    This paper proposes an immunity-based anomaly detection system with sensor agents based on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user. Multiple diverse agents decide whether the behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, while the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system, which is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in user's command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems. PMID:22291560

  19. Nanochemistry of protein-based delivery agents

    NASA Astrophysics Data System (ADS)

    Rajendran, Subin; Udenigwe, Chibuike; Yada, Rickey

    2016-07-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior.

  1. Nanochemistry of Protein-Based Delivery Agents

    PubMed Central

    Rajendran, Subin R. C. K.; Udenigwe, Chibuike C.; Yada, Rickey Y.

    2016-01-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior. PMID:27489854

  2. Nanochemistry of Protein-Based Delivery Agents.

    PubMed

    Rajendran, Subin R C K; Udenigwe, Chibuike C; Yada, Rickey Y

    2016-01-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior.

  3. Evaluating Water Demand Using Agent-Based Modeling

    NASA Astrophysics Data System (ADS)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage

  4. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.

  5. Classification Based on Hierarchical Linear Models: The Need for Incorporation of Social Contexts in Classification Analysis

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2009-01-01

    Many areas in educational and psychological research involve the use of classification statistical analysis. For example, school districts might be interested in attaining variables that provide optimal prediction of school dropouts. In psychology, a researcher might be interested in the classification of a subject into a particular psychological…

  6. Set-Based Discriminative Measure for Electrocardiogram Beat Classification

    PubMed Central

    Li, Wei; Li, Jianqing; Qin, Qin

    2017-01-01

    Computer-aided diagnosis systems can help to reduce the high mortality rate among cardiac patients. Automatic classification of electrocardiogram (ECG) beats plays an important role in such systems, but this issue is challenging because of the complexities of ECG signals. In the literature, feature design has been broadly studied. However, such methodology is inevitably limited by the heuristics of the hand-crafting process and the challenge of the signals themselves. To address this, we treat the problem of ECG beat classification from the metric and measurement perspective. We propose a novel approach, named "Set-Based Discriminative Measure", which first learns a discriminative metric space to ensure that intra-class distances are smaller than inter-class distances for ECG features in a global way, and then measures a new set-based dissimilarity in the learned space to cope with the local variation of samples. Experimental results have demonstrated the advantage of this approach in terms of effectiveness, robustness, and flexibility on ECG beats from the MIT-BIH Arrhythmia Database. PMID:28125072
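
    One plausible reading of the set-based measure, projecting both beat sets with a learned linear metric and averaging nearest-neighbour cross-distances, is sketched below; the identity matrix stands in for the learned metric, which in the paper comes from a global metric-learning stage.

```python
import numpy as np

def set_dissimilarity(A, B, L):
    """Set-to-set dissimilarity: project both beat-feature sets with the
    learned linear map L, then average each point's distance to its
    nearest neighbour in the other set."""
    A, B = np.asarray(A) @ L.T, np.asarray(B) @ L.T
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
L = np.eye(8)                      # identity stands in for a learned metric
normal_beats = rng.normal(0, 1, (20, 8))
ectopic_beats = rng.normal(3, 1, (20, 8))
query_set = rng.normal(0, 1, (5, 8))
# Classify the query set by its smaller set-based dissimilarity.
print("normal" if set_dissimilarity(query_set, normal_beats, L)
      < set_dissimilarity(query_set, ectopic_beats, L) else "ectopic")
```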

  7. Automatic classification of sentences to support Evidence Based Medicine.

    PubMed

    Kim, Su Nam; Martinez, David; Cavedon, Lawrence; Yencken, Lars

    2011-03-29

    Given a set of pre-defined medical categories used in Evidence Based Medicine, we aim to automatically annotate sentences in medical abstracts with these labels. We constructed a corpus of 1,000 medical abstracts annotated by hand with specified medical categories (e.g. Intervention, Outcome). We explored the use of various features based on lexical, semantic, structural, and sequential information in the data, using Conditional Random Fields (CRF) for classification. For the classification tasks over all labels, our systems achieved micro-averaged f-scores of 80.9% and 66.9% over datasets of structured and unstructured abstracts respectively, using sequential features. In labeling only the key sentences, our systems produced f-scores of 89.3% and 74.0% over structured and unstructured abstracts respectively, using the same sequential features. The results over an external dataset were lower (f-scores of 63.1% for all labels, and 83.8% for key sentences). Of the features we used, the best for classifying any given sentence in an abstract were based on unigrams, section headings, and sequential information from preceding sentences. These features resulted in improved performance over a simple bag-of-words approach, and outperformed feature sets used in previous work.
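
    Sentence labelling with CRFs over the feature families the paper found most useful (unigrams, section headings, sequence position) can be sketched with the sklearn-crfsuite package (an assumption about tooling; the authors' toolchain is not stated), on an invented toy abstract.

```python
import sklearn_crfsuite  # assumes the sklearn-crfsuite package is installed

def sentence_features(abstract, i):
    """Features for sentence i: unigrams, section heading, and position,
    roughly the feature families reported as most useful."""
    sent, heading = abstract[i]
    feats = {"heading": heading, "position": i}
    for w in sent.lower().split():
        feats["word:" + w] = True
    return feats

# One toy structured abstract: (sentence, section-heading) pairs with labels.
abstract = [("patients were randomised to drug A", "METHODS"),
            ("the intervention reduced pain scores", "RESULTS"),
            ("drug A is effective", "CONCLUSIONS")]
labels = ["Intervention", "Outcome", "Outcome"]

X = [[sentence_features(abstract, i) for i in range(len(abstract))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, y)   # sequential model: neighbouring labels interact
print(crf.predict(X)[0])
```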

  8. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates two variations in addition to the normal gait sequence: clothing (wearing coats) and carrying a bag. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different feature sets are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets we divided the human body into two parts, the upper and lower body, based on the golden-ratio proportion. In this paper, we adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature-selection method to optimize its discriminating significance. Finally, k-nearest neighbour is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
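
    The Fisher-score selection plus k-NN stage is easy to sketch; the formula below (between-class scatter of the feature means over the pooled within-class variance) is the standard Fisher score, and the gait features are random stand-ins for the wavelet-derived vectors.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter of the feature means
    over the pooled within-class variance (larger = more discriminative)."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Hypothetical wavelet-based gait features: 100 sequences x 40 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))
y = rng.integers(0, 2, size=100)              # 0 = female, 1 = male
X[y == 1, :5] += 1.5                          # make 5 features informative

keep = np.argsort(fisher_score(X, y))[-10:]   # select the top-10 features
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:80][:, keep], y[:80])
print(knn.score(X[80:][:, keep], y[80:]))
```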

  9. Nonlinear scaling analysis approach of agent-based Potts financial dynamical model.

    PubMed

    Hong, Weijia; Wang, Jun

    2014-12-01

    A financial agent-based price model is developed and investigated using the Potts model, one of the dynamic systems of statistical physics. The Potts model, a generalization of the Ising model to more than two components, is a model of interacting spins on a crystalline lattice that describes the interaction strength among the agents. In this work, we investigate and analyze the correlation behavior of the normalized returns of the proposed financial model by a power-law classification scheme analysis and an empirical mode decomposition analysis. Moreover, the daily returns of the Shanghai Composite Index and the Shenzhen Component Index are considered, and a comparative nonlinear analysis of the statistical behaviors of the actual and simulated returns is presented.

  10. Agent-based simulation of a financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
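
    The market mechanism described, resource-constrained random orders whose size depends on recent volatility and which feed back into the price, can be caricatured in a few lines; all coefficients (price impact, volatility memory) are invented, so the sketch only aims to show how such a feedback can fatten the return tails.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, T = 1000, 2000
cash = np.full(n_agents, 1000.0)
assets = np.full(n_agents, 100.0)
price, sigma = 10.0, 0.02
log_returns = []

for t in range(T):
    # Random buy/sell decisions; order size scales with recent volatility
    # (the volatility feedback behind clustering in the paper's model).
    side = rng.choice([-1.0, 1.0], size=n_agents)
    size = rng.random(n_agents) * sigma / 0.02
    buy = np.minimum(size * (side > 0), cash / price)    # resource limits
    sell = np.minimum(size * (side < 0), assets)
    demand = buy.sum() - sell.sum()
    new_price = price * np.exp(1e-4 * demand)            # toy price impact
    cash += sell * price - buy * price                   # cash is conserved
    assets += buy - sell
    log_returns.append(np.log(new_price / price))
    sigma = 0.9 * sigma + 0.1 * abs(log_returns[-1])     # volatility memory
    price = new_price

r = np.array(log_returns)
print("excess kurtosis:", (((r - r.mean()) / r.std()) ** 4).mean() - 3)
```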

  11. An Agent-Based Intelligent CAD Platform for Collaborative Design

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Cui, Xingran; Hu, Xiuyin

    Collaborative design can create added value in the design and production process by bringing the benefits of teamwork and cooperation in a concurrent and coordinated manner. However, distributed design knowledge and product data make the design process cumbersome. To facilitate collaborative design, an agent-based intelligent CAD platform is implemented, applying intelligent agents to the collaborative design process. Adopting the JADE platform as the framework, an intelligent collaborative design software package (the Co-Cad platform for short) is designed. In this platform, every designer, design software package, management software package, equipment item and resource is regarded as a single agent, and legacy design can be abstracted as interaction between agents. Multimedia technology is integrated into the Co-Cad platform, making communication and identity authentication among collaborative designers from different areas more convenient. Finally, an instance of collaborative design using the Co-Cad platform is presented.

  12. Interannual rainfall variability and SOM-based circulation classification

    NASA Astrophysics Data System (ADS)

    Wolski, Piotr; Jack, Christopher; Tadross, Mark; van Aardenne, Lisa; Lennard, Christopher

    2017-03-01

    Self-Organizing Maps (SOM) based classifications of synoptic circulation patterns are increasingly being used to interpret large-scale drivers of local climate variability, and as part of statistical downscaling methodologies. These applications rely on a basic premise of synoptic climatology, i.e. that local weather is conditioned by the large-scale circulation. While it is clear that this relationship holds in principle, the implications of its implementation through SOM-based classification, particularly at interannual and longer time scales, are not well recognized. Here we use a SOM to understand the interannual synoptic drivers of climate variability at two locations in the winter and summer rainfall regimes of South Africa. We quantify the portion of variance in seasonal rainfall totals that is explained by year-to-year differences in the synoptic circulation, as schematized by a SOM. We furthermore test how different spatial domain sizes and synoptic variables affect the ability of the SOM to capture the dominant synoptic drivers of interannual rainfall variability. Additionally, we identify systematic synoptic forcing that is not captured by the SOM classification. The results indicate that the frequency of synoptic states, as schematized by a relatively disaggregated SOM (7 × 9) of prognostic atmospheric variables, including specific humidity, air temperature and geostrophic winds, captures only 20-45% of interannual local rainfall variability, and that the residual variance contains a strong systematic component. Utilising a multivariate linear regression framework demonstrates that this residual variance can largely be explained using synoptic variables over a particular location; even though these variables are used in the development of the SOM, their influence diminishes with the size of the SOM spatial domain. The influence of the SOM domain size, the choice of SOM atmospheric variables and grid-point explanatory variables on the levels of explained variance is also assessed.

  13. Scene classification of infrared images based on texture feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    Scene classification refers to assigning a physical scene to one of a set of predefined categories, and texture features provide a useful basis for classifying scenes. Texture can be considered as repeating patterns of local variation in pixel intensities, and texture analysis is important in many applications of computer image analysis for the classification or segmentation of images based on local spatial variations of intensity. Because texture describes the structural information of an image, it provides data for classification that complement the spectrum. Infrared thermal imagers are now used in many different fields. Since infrared images reflect the thermal radiation of objects, they have some shortcomings: poor contrast between objects and background, blurred edges, and considerable noise. These shortcomings make it difficult to extract texture features from infrared images. In this paper we have developed a texture-feature-based algorithm to classify scenes in infrared images. Texture is extracted using the Gabor wavelet transform, which has excellent capability for analysing the frequency and orientation content of local regions, and which is chosen for its biological relevance and technical properties. First, after introducing the Gabor wavelet transform and texture analysis methods, texture features are extracted from the infrared images by the Gabor wavelet transform, exploiting the multi-scale property of the Gabor filter. Second, we take the means and standard deviations at different scales and orientations as texture parameters. The last stage is classification of the scene texture parameters with the least squares support vector machine (LS-SVM) algorithm. SVM is based on the principle of structural risk minimization (SRM); compared with standard SVM, LS-SVM overcomes the shortcoming of high computational complexity by solving a set of linear equations instead of a quadratic programming problem.

  14. A technology path to tactical agent-based modeling

    NASA Astrophysics Data System (ADS)

    James, Alex; Hanratty, Timothy P.

    2017-05-01

    Wargaming is a process of thinking through and visualizing events that could occur during a possible course of action. Over the past 200 years, wargaming has matured into a set of formalized processes. One area of growing interest is the application of agent-based modeling. Agent-based modeling and its supporting technologies have the potential to introduce a third-generation wargaming capability to the Army, creating a decision-making overmatch capability. In its simplest form, agent-based modeling is a computational technique that helps the modeler understand and simulate how the "whole of a system" responds to change over time. It provides a decentralized method of looking at situations in which individual agents are instantiated within an environment, interact with each other, and are empowered to make their own decisions. However, this technology is not without its own risks and limitations. This paper explores a technology roadmap, identifying research topics that could realize agent-based modeling within a tactical wargaming context.
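    A minimal sketch of agent-based modeling in the simplest form characterized above: agents instantiated in an environment, interacting and updating their own state each step; the majority-adoption rule is purely illustrative:

```python
import random

class Agent:
    """A minimal agent that observes its neighborhood and updates its state."""
    def __init__(self, state):
        self.state = state

    def step(self, neighbors):
        # Toy decision rule: adopt the majority state of the neighborhood.
        ones = sum(n.state for n in neighbors)
        self.state = 1 if ones > len(neighbors) / 2 else 0

# One hundred agents on a ring; the system-level outcome emerges over time.
agents = [Agent(1 if random.random() < 0.3 else 0) for _ in range(100)]
for t in range(20):
    for i, a in enumerate(agents):
        a.step([agents[(i - 1) % 100], agents[(i + 1) % 100]])
print(sum(a.state for a in agents), "agents in state 1 after 20 steps")
```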

  15. [Gadolinium-based contrast agents for magnetic resonance imaging].

    PubMed

    Carrasco Muñoz, S; Calles Blanco, C; Marcin, Javier; Fernández Álvarez, C; Lafuente Martínez, J

    2014-06-01

    Gadolinium-based contrast agents are increasingly being used in magnetic resonance imaging. These agents can improve the contrast in images and provide information about function and metabolism, increasing both sensitivity and specificity. We describe the gadolinium-based contrast agents that have been approved for clinical use, detailing their main characteristics based on their chemical structure, stability, and safety. In general terms, these compounds are safe. Nevertheless, adverse reactions, the possibility of nephrotoxicity from these compounds, and the possibility of developing nephrogenic systemic fibrosis will be covered in this article. Lastly, the article will discuss the current guidelines, recommendations, and contraindications for their clinical use, including the management of pregnant and breast-feeding patients.

  16. Manganese-based MRI contrast agents: past, present and future

    PubMed Central

    Pan, Dipanjan; Schmieder, Anne H.; Wickline, Samuel A.; Lanza, Gregory M.

    2011-01-01

    Paramagnetic and superparamagnetic metals are used as contrast materials for magnetic resonance (MR) based techniques. The lanthanide metal gadolinium (Gd) had been the most widely explored, predominant paramagnetic contrast agent until the metal was associated with nephrogenic systemic fibrosis (NSF), a rare but serious side effect in patients with renal problems. Manganese was one of the earliest reported examples of a paramagnetic contrast material for MRI because of its efficient positive contrast enhancement. In this review, manganese-based contrast agent approaches are discussed with a particular emphasis on their synthetic approaches. Both typical small-molecule blood pool contrast agents and more recently developed novel nanometer-sized materials are reviewed, focusing on a number of successful molecular imaging examples. PMID:22043109

  17. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
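    A minimal sketch of LINDA-style tuple space communication of the kind such knowledge agents could use; the operations and the example tuple are illustrative, not the paper's implementation:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space: out() writes, rd() reads, in_() removes.
    None in a template acts as a wildcard."""
    def __init__(self):
        self.tuples = []
        self.lock = threading.Lock()

    def out(self, *tup):
        with self.lock:
            self.tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, *template):
        with self.lock:
            for tup in self.tuples:
                if self._match(template, tup):
                    return tup

    def in_(self, *template):
        with self.lock:
            for tup in self.tuples:
                if self._match(template, tup):
                    self.tuples.remove(tup)
                    return tup

ts = TupleSpace()
ts.out("fact", "power-bus-3", "overload")   # one agent publishes knowledge
print(ts.rd("fact", None, None))            # another agent reads it by template
```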

  18. Online Classification of Contaminants Based on Multi-Classification Support Vector Machine Using Conventional Water Quality Sensors

    PubMed Central

    Huang, Pingjie; Jin, Yu; Hou, Dibo; Yu, Jie; Tu, Dezhan; Cao, Yitong; Zhang, Guangxin

    2017-01-01

    Water quality early warning systems are mainly used to detect deliberate or accidental water pollution events in water distribution systems. After the presence of pollutants is detected, identifying their types is necessary to provide warning information about pollutant characteristics and emergency solutions. Thus, a real-time contaminant classification methodology, which uses the multi-classification support vector machine (SVM), is proposed in this study to obtain the probability of a contaminant belonging to each category. The SVM-based model selects samples with indistinct features, which are mostly low-concentration samples, as the support vectors, thereby reducing the influence of contaminant concentration in the building of a pattern library. New sample points are classified into the corresponding regions after the classification boundaries are constructed with the support vectors. Experimental results show that the multi-classification SVM-based approach is less affected by the concentration of contaminants when establishing a pattern library compared with the cosine distance classification method. Moreover, the proposed approach avoids making a single decision when classification features are unclear in the initial phase of injecting contaminants. PMID:28335400
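    A minimal sketch of obtaining per-category probabilities from a multi-class SVM, assuming scikit-learn; the toy sensor features and contaminant classes are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Toy sensor responses (e.g. pH, conductivity, turbidity) for 3 contaminants.
X = np.vstack([rng.normal(c, 0.5, size=(40, 3)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 40)

clf = SVC(kernel="rbf", probability=True).fit(X, y)   # Platt-scaled probabilities
sample = rng.normal(1.4, 0.5, size=(1, 3))
print(clf.predict_proba(sample))   # probability of belonging to each category
```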

  19. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    PubMed Central

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036

  20. Intelligent agent-based intrusion detection system using enhanced multiclass SVM.

    PubMed

    Ganapathy, S; Yogesh, P; Kannan, A

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set.

  1. Web entity extraction based on entity attribute classification

    NASA Astrophysics Data System (ADS)

    Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Su, Ya-Ru

    2011-12-01

    Large amounts of entity data are continuously published on web pages, and extracting these entities automatically for further application is very significant. Rule-based entity extraction yields promising results; however, it is labor-intensive and hard to scale. This paper proposes a web entity extraction method based on entity attribute classification, which avoids manual annotation of samples. First, web pages are segmented into different blocks by the Vision-based Page Segmentation (VIPS) algorithm, and a binary LibSVM classifier is trained to retrieve the candidate blocks that contain the entity contents. Second, the candidate blocks are partitioned into candidate items, LibSVM classifiers are applied to annotate the attributes of the items, and the annotation results are aggregated into an entity. Results show that the proposed method performs well in extracting agricultural supply and demand entities from web pages.

  2. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  3. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.
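    A minimal sketch of representation-based classification by reconstruction residual; least-squares coding is used here as a simplified stand-in for the paper's weighted kernel sparse coding:

```python
import numpy as np

def residual_classify(D_per_class, x):
    """Assign x to the class whose training dictionary reconstructs it best."""
    residuals = []
    for D in D_per_class:                      # D: (dim, n_samples) per class
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(4)
D0 = rng.normal(0.0, 1.0, size=(50, 20))       # class 0 training features
D1 = rng.normal(2.0, 1.0, size=(50, 20))       # class 1 training features
x = rng.normal(2.0, 1.0, size=50)              # a test feature vector
print(residual_classify([D0, D1], x))          # -> 1
```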

  4. [Hormone-based classification and therapy concepts in psychiatry].

    PubMed

    Himmerich, H; Steinberg, H

    2011-07-01

    This study retraces key aspects of the history of hormone-based classification and therapy concepts in psychiatry. The different contributions to this history are presented not only from a historical but also from a current medico-scientific perspective. One of the oldest, yet ethically most problematic, hormonal methods for modifying undesirable behaviour and sexuality was castration, which was widely used in the 20th century to "cure" homosexuality. Felix Platter, whose concept was humoral-pathological in nature, documented the first postpartum psychosis in the German-speaking countries, a condition whose pathogenesis, according to present-day expertise, is brought about by changes in female hormones. The concept of an "endocrine psychiatry" was developed at the beginning of the 20th century. Some protagonists of neuroendocrinology are highlighted, such as Paul Julius Möbius around 1900 or, in the 1950s, Manfred Bleuler, the nestor of this new discipline. Only the discovery of hormones as such, and the development of technologies like the radioimmunoassay to measure and quantify hormone changes in mental illnesses, made it possible to investigate these conditions properly. Ever since, hormone-based therapeutic and classification concepts have played an important role, above all in sexual, affective and eating disorders as well as alcohol dependence.

  5. Drunk driving detection based on classification of multivariate time series.

    PubMed

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
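    A minimal sketch of the bottom-up piecewise linear representation used to extract slope and duration features from a driving signal; the thresholds and the data are illustrative:

```python
import numpy as np

def fit_cost(seg):
    """Squared error and slope of a least-squares line fit to one segment."""
    t = np.arange(len(seg))
    slope, intercept = np.polyfit(t, seg, 1)
    return float(np.sum((seg - (slope * t + intercept)) ** 2)), slope

def bottom_up(series, max_error):
    """Merge adjacent fine segments until any further merge exceeds max_error;
    return (slope, length) features per segment."""
    segs = [list(series[i:i + 4]) for i in range(0, len(series), 4)]
    while len(segs) > 1:
        costs = [fit_cost(np.array(segs[i] + segs[i + 1]))[0]
                 for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segs[i] = segs[i] + segs.pop(i + 1)
    return [(fit_cost(np.array(s))[1], len(s)) for s in segs]

lateral_position = np.cumsum(np.random.default_rng(5).normal(size=120))
print(bottom_up(lateral_position, max_error=5.0))  # features for an SVM classifier
```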

  6. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, characteristic scales in the data are detected with the help of local variance, and segmentation is performed at these scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and the standard deviation of elevation, respectively. The results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionalities. PMID:22485060
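    A minimal sketch of the two-parameter partitioning step (mean elevation and local standard deviation of elevation), on a randomly generated stand-in for an SRTM tile:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
dem = rng.normal(500, 200, size=(200, 200))    # stand-in for an SRTM tile

# Local standard deviation from local moments over a fixed window.
win = 9
mean = uniform_filter(dem, win)
sq_mean = uniform_filter(dem ** 2, win)
local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))

# Partition into four sub-domains by the means of elevation and its
# local standard deviation, echoing the paper's two-parameter thresholds.
cls = (dem > dem.mean()).astype(int) * 2 + (local_std > local_std.mean()).astype(int)
print(np.bincount(cls.ravel()))                # pixel counts per topographic class
```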

  7. Classification of emerald based on multispectral image and PCA

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Zhao, Dazun; Huang, Qingmei; Ren, Pengyuan; Feng, Jie; Zhang, Xiaoyan

    2005-02-01

    Traditionally, the grade discrimination and classification of emeralds have been carried out using methods based on human experience. In our previous work, a method based on the NCS (Natural Color System) colour system and sRGB colour space conversion was employed for a coarse grade classification of emeralds. However, it is well known that two colours do not truly match unless their spectra are the same, and metameric colours cannot be differentiated by a three-channel (RGB) camera. A multispectral camera (MSC), consisting of a trichromatic digital camera and a set of wide-band filters, is therefore used as the image capturing device in this paper. The spectra are obtained by measuring a series of natural emerald samples, and principal component analysis (PCA) is employed to obtain spectral eigenvectors. During the fine classification, the colour difference and the RMS of the difference between estimated and original spectra are used as criteria. It is shown that 6 eigenvectors are enough to reconstruct the reflection spectra of the testing samples.
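    A minimal sketch of PCA-based spectral reconstruction with 6 eigenvectors and an RMS criterion, assuming scikit-learn; the spectra are random stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
spectra = rng.random((60, 31))                # 60 samples x 31 spectral bands

pca = PCA(n_components=6).fit(spectra)        # 6 eigenvectors, as in the paper
recon = pca.inverse_transform(pca.transform(spectra))

rms = np.sqrt(np.mean((spectra - recon) ** 2, axis=1))
print("mean spectral RMS error:", rms.mean()) # fine-classification criterion
```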

  8. ECG-based heartbeat classification for arrhythmia detection: A survey.

    PubMed

    Luz, Eduardo José da S; Schwartz, William Robson; Cámara-Chávez, Guillermo; Menotti, David

    2016-04-01

    An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. Over the last decades, many works have been developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods for automated ECG-based heartbeat abnormality classification by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature, present concluding remarks and future challenges, and propose an evaluation process workflow to guide authors in future works. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are accumulating at a torrential rate. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects far from Earth, and their spectra are usually contaminated by various kinds of noise; recognizing these two types of spectra is therefore a typical problem in automatic spectra classification. Furthermore, the method employed, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in mass spectral data processing. In conclusion, the results of this work are helpful for the study of galaxy and quasar spectra classification.

  10. Peatland classification of West Siberia based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.

    2014-12-01

    Increasing interest in peatlands for the prediction of environmental changes requires an understanding of their geographical distribution. The West Siberian Plain is the biggest peatland area in Eurasia and is situated at high latitudes that are experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented towards environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and the 5th channel was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB and then classified in Multispec (Purdue Research Foundation) using the maximum likelihood algorithm of the supervised classification method. Training sample selection was mostly based on spectral signatures due to limited ancillary and high-resolution image data. As an additional source of information, we applied our field knowledge resulting from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. The accuracy assessment, based on 439 randomly assigned test sites, showed that the final map accuracy was 80%. Total peatland area was estimated at 73.0 Mha.
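    A minimal sketch of the threshold masking step; the Green-Red Vegetation Index formula is the standard one, while the threshold values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)
green = rng.random((100, 100))               # green-band reflectance (toy data)
red = rng.random((100, 100))                 # red-band reflectance (toy data)
swir = rng.random((100, 100))                # 5th channel, used to mask water

grvi = (green - red) / (green + red + 1e-9)  # Green-Red Vegetation Index
peat_mask = (grvi > 0.05) & (swir > 0.1)     # hypothetical thresholds
print(peat_mask.mean() * 100, "% of pixels kept for supervised classification")
```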

  11. Diversity and Community: The Role of Agent-Based Modeling.

    PubMed

    Stivala, Alex

    2017-03-13

    Community psychology involves several dialectics between potentially opposing ideals, such as theory and practice, rights and needs, and respect for human diversity and sense of community. Some recent papers in the American Journal of Community Psychology have examined the diversity-community dialectic, some with the aid of agent-based modeling and concepts from network science. This paper further elucidates these concepts and suggests that research in community psychology can benefit from a useful dialectic between agent-based modeling and the real-world concerns of community psychology.

  12. Keratoconus: Classification scheme based on videokeratography and clinical signs

    PubMed Central

    Li, Xiaohui; Yang, Huiying; Rabinowitz, Yaron S.

    2013-01-01

    PURPOSE To determine in a longitudinal study whether there is correlation between videokeratography and clinical signs of keratoconus that might be useful to practicing clinicians. SETTING Cornea-Genetic Eye Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA. METHODS Eyes grouped as keratoconus, early keratoconus, keratoconus suspect, or normal based on clinical signs and videokeratography were examined at baseline and followed for 1 to 8 years. Differences in quantitative videokeratography indices and the progression rate were evaluated. The quantitative indices were central keratometry (K), the inferior–superior (I–S) value, and the keratoconus percentage index (KISA). Discriminant analysis was used to estimate the classification rate using the indices. RESULTS There were significant differences at baseline between the normal, keratoconus-suspect, and early keratoconus groups in all indices; the respective means were central K: 44.17 D, 45.13 D, and 45.97 D; I–S: 0.57, 1.20, and 4.44; log(KISA): 2.49, 2.94, and 5.71 (all P<.001 after adjusting for covariates). Over a median follow-up of 4.1 years, approximately 28% in the keratoconus-suspect group progressed to early keratoconus or keratoconus and 75% in the early keratoconus group progressed to keratoconus. Using all 3 indices and age, 86.9% in the normal group, 75.3% in the early keratoconus group, and 44.6% in the keratoconus-suspect group could be classified, yielding a total classification rate of 68.9%. CONCLUSIONS Cross-sectional and longitudinal data showed significant differences between groups in the 3 indices. Use of this classification scheme might form a basis for detecting subclinical keratoconus. PMID:19683159
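    As an illustration of the discriminant-analysis step, a minimal sketch over the three indices plus age, assuming scikit-learn; the toy feature vectors loosely echo the group means quoted above and are not the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy feature vectors: [central K (D), I-S value, log(KISA), age].
X = np.array([
    [44.2, 0.6, 2.5, 35], [44.0, 0.5, 2.4, 41],   # normal
    [45.1, 1.2, 2.9, 33], [45.3, 1.3, 3.0, 29],   # keratoconus suspect
    [46.0, 4.4, 5.7, 27], [46.2, 4.6, 5.8, 31],   # early keratoconus
])
y = [0, 0, 1, 1, 2, 2]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[45.0, 1.1, 2.8, 30]]))        # -> suspect group
```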

  13. Kernel-based machine learning techniques for infrasound signal classification

    NASA Astrophysics Data System (ADS)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for the trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system; this corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this end, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines, and the resulting performance is compared to several baseline classifiers.
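    A minimal sketch of supervised SVM training with hyperparameter optimization over C and gamma, assuming scikit-learn; the PMCC-style features are random stand-ins:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(9)
# Toy PMCC-style features: trace velocity, azimuth, frequency, amplitude.
X = rng.normal(size=(300, 4))
y = rng.integers(0, 2, size=300)               # signal of interest vs. noise
X[y == 1] += 0.8                               # make the classes separable

grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},  # hyperparameter optimization
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```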

  14. Applying object-based image analysis and knowledge-based classification to ADS-40 digital aerial photographs to facilitate complex forest land cover classification

    NASA Astrophysics Data System (ADS)

    Hsieh, Yi-Ta; Chen, Chaur-Tzuhn; Chen, Jan-Chang

    2017-01-01

    In general, considerable human and material resources are required to perform a forest inventory survey, so using remote sensing technologies to reduce forest inventory costs has become an important topic in forest inventory studies. Leica ADS-40 digital aerial photographs offer advantages such as high spatial resolution, high radiometric resolution, and a wealth of spectral information, and as a result they have been widely used for forest inventories. We classified ADS-40 digital aerial photographs according to the complex forest land cover types listed in the Fourth Forest Resource Survey in an effort to establish a classification method for such imagery. The images were classified using the knowledge-based classification method in combination with object-based analysis techniques, decision tree classification techniques, classification parameters such as object texture, shape, and spectral characteristics, a class-based classification method, and geographic information system mapping information. Finally, the results were compared with manually interpreted aerial photographs. Images were classified using a hierarchical classification method comprising four classification levels (levels 1 to 4). The overall accuracy (OA) of levels 1 to 4 is in the range 64.29% to 98.50%. The final comparisons showed that the proposed classification method achieved an OA of 78.20% and a kappa coefficient of 0.7597. In the classification results, errors occurred mostly in images of sunlit crowns because the image values for individual trees varied; this variance was caused by the crown structure and the incident angle of the sun. These errors lowered image classification accuracy and warrant further study. This study corroborates the feasibility of mapping complex forest land cover types using ADS-40 digital aerial photographs.

  15. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    NASA Astrophysics Data System (ADS)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

    Developments in remote sensing analysis have kept pace with developments in technology, especially in sensors and platforms. Many images now have high spatial and radiometric resolution and therefore carry much more information. Analyses of vegetation objects, such as floristic composition, benefit greatly from these developments. Floristic composition can be interpreted using several methods, including pixel-based classification and object-based classification. A problem for pixel-based methods on high spatial resolution imagery is the salt-and-pepper noise that appears in the classification results. The purpose of this research is to compare the effectiveness of pixel-based classification and object-based classification for vegetation composition mapping on high-resolution Worldview-2 imagery. The results show that pixel-based classification using a 5×5 majority kernel window gives the highest accuracy among the tested classifications. The highest accuracy is 73.32%, obtained from Worldview-2 imagery radiometrically corrected to surface reflectance; however, in terms of per-class accuracy, object-based classification performs best among the tested methods. From the standpoint of effectiveness, pixel-based classification is more effective than object-based classification for vegetation composition mapping in the Tidar forest.
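    A minimal sketch of the 5×5 majority (modal) filter applied to a per-pixel classification, assuming scipy; the class map is a random stand-in:

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority(values):
    """Modal class label within the window."""
    return np.bincount(values.astype(int)).argmax()

rng = np.random.default_rng(10)
classified = rng.integers(0, 4, size=(60, 60))           # toy per-pixel class map
smoothed = generic_filter(classified, majority, size=5)  # 5x5 majority window
print((smoothed != classified).mean(), "of pixels relabeled")
```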

  16. Adding ecosystem function to agent-based land use models

    PubMed Central

    Yadav, V.; Del Grosso, S.J.; Parton, W.J.; Malanson, G.P.

    2015-01-01

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeochemical models are needed in order to calculate such fluxes. The Century model is described with particular attention to the land use choices that it can encompass. When Century is applied to a land use problem the combinatorial choices lead to a potentially unmanageable number of simulation runs. Century is also parameter-intensive. Three ways of including Century output in agent-based models, ranging from separately calculated look-up tables to agents running Century within the simulation, are presented. The latter may be most efficient, but it moves the computing costs to where they are most problematic. Concern for computing costs should not be a roadblock. PMID:26191077

  17. GARLIC: Genomic Autozygosity Regions Likelihood-based Inference and Classification.

    PubMed

    Szpiech, Zachary A; Blant, Alexandra; Pemberton, Trevor J

    2017-07-01

    Runs of homozygosity (ROH) are important genomic features that manifest when identical-by-descent haplotypes are inherited from parents. Their length distributions and genomic locations are informative about population history, and they are useful for mapping recessive loci contributing to both Mendelian and complex disease risk. Here, we present software implementing a model-based method (Pemberton et al., 2012) for inferring ROH in genome-wide SNP datasets that incorporates population-specific parameters and a genotyping error rate, as well as providing a length-based classification module to identify biologically interesting classes of ROH. Using simulations, we evaluate the performance of this method. GARLIC is written in C++. Source code and pre-compiled binaries (Windows, OSX and Linux) are hosted on GitHub (https://github.com/szpiech/garlic) under the GNU General Public License version 3. zachary.szpiech@ucsf.edu. Supplementary data are available at Bioinformatics online.

  18. Automatic classification of visual evoked potentials based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting the compound action potential is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the extremes, called waves, of the time-domain characteristic. The decision process is complex, so the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that provides automatic classification of visual evoked potentials. The algorithm assigns each individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
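    A minimal sketch of wavelet-energy features plus linear discriminant analysis, assuming the pywt and scikit-learn packages; the signals, wavelet choice and pathology model are illustrative:

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(signal):
    """Energies of the wavelet decomposition levels as features."""
    coeffs = pywt.wavedec(signal, "db4", level=5)
    return [np.sum(c ** 2) for c in coeffs]

rng = np.random.default_rng(11)
normal = [rng.normal(size=256) for _ in range(30)]            # toy normal VEPs
patho = [rng.normal(size=256) * 2 + 0.5 for _ in range(30)]   # toy pathological VEPs
X = np.array([wavelet_features(s) for s in normal + patho])
y = [0] * 30 + [1] * 30

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))
```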

  19. Simulating cancer growth with multiscale agent-based modeling.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S

    2015-02-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models.

  20. The evolving classification of soft tissue tumours - an update based on the new 2013 WHO classification.

    PubMed

    Fletcher, Christopher D M

    2014-01-01

    The new World Health Organization (WHO) classification of soft tissue tumours was published in early 2013, almost 11 years after the previous edition. While the number of newly recognized entities included for the first time is fewer than that in 2002, there have instead been substantial steps forward in molecular genetic and cytogenetic characterization of this family of tumours, leading to more reproducible diagnosis, a more meaningful classification scheme and providing new insights regarding pathogenesis, which previously has been obscure in most of these lesions. This brief overview summarizes changes in the classification in each of the broad categories of soft tissue tumour (adipocytic, fibroblastic, etc.) and also provides a short summary of newer genetic data which have been incorporated in the WHO classification.

  1. Automatic Text Classification of English Newswire Articles Based on Statistical Classification Techniques

    NASA Astrophysics Data System (ADS)

    Zu, Guowei; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    The basic process of automatic text classification is learning a classification scheme from training examples and then using it to classify unseen textual documents. It is essentially the same as the process for graphic or character pattern recognition, so pattern recognition approaches can be used for automatic text categorization. In this research, several statistical classification techniques, employing Euclidean distance, various similarity measures, the linear discriminant function, projection distance, modified projection distance, SVM and the nearest neighbor rule, have been used for automatic text classification. Principal component analysis was used to reduce the dimensionality of the feature vector. Comparative experiments have been conducted on the Reuters-21578 test collection of English newswire articles. The results illustrate that modified projection distance outperforms the other methods overall and that principal component analysis is suitable for reducing the dimensionality of the text features.

  2. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  3. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    NASA Astrophysics Data System (ADS)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).

  4. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  5. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
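    A minimal sketch of the unsupervised GMM variant on toy multiparametric voxel intensities, assuming scikit-learn; the postprocessing step that maps classes to tissue types via probability maps is left out:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)
# Toy multiparametric MR voxels: rows = voxels, columns = T1, T2, FLAIR values.
voxels = np.vstack([
    rng.normal([0.2, 0.3, 0.2], 0.05, size=(500, 3)),   # healthy-like tissue
    rng.normal([0.7, 0.8, 0.9], 0.05, size=(100, 3)),   # tumour-like intensities
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
labels = gmm.predict(voxels)          # unsupervised voxel classes
print(np.bincount(labels))            # voxels per class, to be mapped to tissues
```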

  6. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A procedure for blending manual and correlation-based synoptic classifications

    NASA Astrophysics Data System (ADS)

    Frakes, Brent; Yarnal, Brent

    1997-11-01

    Manual and correlation-based (also known as Lund or Kirchhofer) classifications are important to synoptic climatology, but both have significant drawbacks. Manual classifications are inherently subjective and labour intensive, whereas correlation-based classifications give the investigator little control over the map-patterns generated by the computer. This paper develops a simple procedure that combines these two classification methods, thereby minimizing these weaknesses. The hybrid procedure utilizes a relatively short-term manual classification to generate composite pressure surfaces, which are then used as seeds in a long-term correlation-based computer classification. Overall, the results show that the hybrid classification reproduces the manual classification while optimizing speed, objectivity and investigator control, thus suggesting that the hybrid procedure is superior to the manual or correlation classifications as they are currently used. More specifically, the results demonstrate little difference between the hybrid procedure and the original manual classification at monthly and longer time-scales, with less internal variation in the hybrid types than in the subjective categories. However, the two classifications showed substantial differences at the daily level, not because of poor performance by the hybrid procedure, but because of errors introduced by the subjectivity of the manual classification.
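    A minimal sketch of the correlation-based assignment step: each daily field is typed by its highest pattern correlation with a seed composite derived from the short manual catalogue (the data below are random stand-ins):

```python
import numpy as np

def classify_days(fields, seeds):
    """Assign each daily pressure field to the best-correlated seed composite."""
    labels = []
    for f in fields:
        r = [np.corrcoef(f.ravel(), s.ravel())[0, 1] for s in seeds]
        labels.append(int(np.argmax(r)))
    return labels

rng = np.random.default_rng(13)
seeds = [rng.normal(size=(10, 12)) for _ in range(4)]   # composites from the
                                                        # manual classification
days = [seeds[i % 4] + rng.normal(0, 0.5, size=(10, 12)) for i in range(20)]
print(classify_days(days, seeds))    # the long-term record typed automatically
```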

  8. [Vegetation change in Shenzhen City based on NDVI change classification].

    PubMed

    Li, Yi-Jing; Zeng, Hui; Wei, Jian-Bing

    2008-05-01

    Based on TM images from 1988 and 2003 as well as the land-use change survey data of 2004, vegetation change in Shenzhen City was assessed by an NDVI (normalized difference vegetation index) change classification method, and the impacts of natural and social constraining factors were analyzed. The results showed that, as a whole, the rapid urbanization of 1988-2003 had relatively little impact on the vegetation cover of the City, but in its low-altitude plain areas the vegetation cover degraded more obviously. The main causes of the localized ecological degradation were the encroachment of built-up areas on woods and orchards, land transformation from woods to orchards at altitudes above 100 m, and the low percentage of green land in some built-up areas. In the future, the protection and construction of vegetation in Shenzhen should focus on strengthening the protection and restoration of remnant woods, avoiding the expansion of built-up areas into well-vegetated woods and orchards, rectifying unreasonable orchard construction at altitudes above 100 m, and consolidating greenbelt construction inside built-up areas. The NDVI change classification method was considered to work well in efficiently uncovering the trend of macroscale vegetation change while avoiding the effect of random noise in the data.
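
    The NDVI change idea can be stated compactly: NDVI = (NIR - Red)/(NIR + Red) is computed for each date and the difference is classified. Below is a minimal sketch under assumed band arrays and an illustrative threshold, not the study's actual class rules.

    ```python
    # Minimal sketch of NDVI change classification between two image dates.
    # `red`/`nir` are reflectance bands as float arrays; the 0.2 threshold
    # is illustrative.
    import numpy as np

    def ndvi(red, nir):
        return (nir - red) / (nir + red + 1e-9)

    def change_classes(red_t1, nir_t1, red_t2, nir_t2, thresh=0.2):
        d = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
        classes = np.full(d.shape, "stable", dtype=object)
        classes[d > thresh] = "improved"
        classes[d < -thresh] = "degraded"
        return classes
    ```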

  9. Lung sound classification using cepstral-based statistical features.

    PubMed

    Sengupta, Nandini; Sahidullah, Md; Saha, Goutam

    2016-08-01

    Lung sounds convey useful information related to pulmonary pathology. In this paper, short-term spectral characteristics of lung sounds are studied to characterize the lung sounds for the identification of associated diseases. Motivated by the success of cepstral features in speech signal classification, we evaluate five different cepstral features to recognize three types of lung sounds: normal, wheeze and crackle. Subsequently, for fast and efficient classification, we propose a new feature set computed from the statistical properties of cepstral coefficients. Experiments are conducted on a dataset of 30 subjects using an artificial neural network (ANN) as the classifier. Results show that the statistical features extracted from mel-frequency cepstral coefficients (MFCCs) of lung sounds outperform commonly used wavelet-based features as well as standard cepstral coefficients including MFCCs. Further, we experimentally optimize different control parameters of the proposed feature extraction algorithm. Finally, we evaluate the features for noisy lung sound recognition. We have found that our newly investigated features are more robust than existing features and show better recognition accuracy even at low signal-to-noise ratios (SNRs).
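
    A minimal sketch of the cepstral-statistics idea, assuming the librosa package for MFCC extraction: frame-wise MFCCs are summarized per coefficient with simple statistics to form a compact feature vector. The exact statistics and the ANN classifier of the paper are not reproduced.

    ```python
    # Minimal sketch: summarize frame-wise MFCC tracks with statistics.
    import numpy as np
    import librosa
    from scipy.stats import skew, kurtosis

    def mfcc_stat_features(signal, sr, n_mfcc=13):
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                               skew(mfcc, axis=1), kurtosis(mfcc, axis=1)])
    ```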

  10. Half-Face Dictionary Integration for Representation-Based Classification.

    PubMed

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Wu, Xiao-Jun

    2017-01-01

    This paper presents a half-face dictionary integration (HFDI) algorithm for representation-based classification. The proposed HFDI algorithm measures residuals between an input signal and the reconstructed one, using both the original and the synthesized dual-column (row) half-face training samples. More specifically, we first generate a set of virtual half-face samples for the purpose of training data augmentation. The aim is to obtain high-fidelity collaborative representation of a test sample. In this half-face integrated dictionary, each original training vector is replaced by an integrated dual-column (row) half-face matrix. Second, to reduce the redundancy between the original dictionary and the extended half-face dictionary, we propose an elimination strategy to gain the most robust training atoms. The last contribution of the proposed HFDI method is the use of a competitive fusion method weighting the reconstruction residuals from different dictionaries for robust face classification. Experimental results obtained from the Facial Recognition Technology, Aleix and Robert, Georgia Tech, ORL, and Carnegie Mellon University-pose, illumination and expression data sets demonstrate the effectiveness of the proposed method, especially in the case of the small sample size problem.
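
    The data-augmentation idea at the core of HFDI can be illustrated compactly: virtual training faces are synthesized from the left and right halves of an image by mirroring. The sketch below shows only this step, under assumed array conventions; the dictionary integration, atom elimination, and residual fusion of the paper are not reproduced.

    ```python
    # Minimal sketch: synthesize virtual full faces from half-faces.
    import numpy as np

    def half_face_variants(face):
        """face: 2-D grayscale array with an even number of columns."""
        w = face.shape[1] // 2
        left, right = face[:, :w], face[:, w:]
        left_full = np.hstack([left, left[:, ::-1]])     # left half + its mirror
        right_full = np.hstack([right[:, ::-1], right])  # mirror + right half
        return left_full, right_full
    ```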

  11. Cloud detection and classification based on MAX-DOAS observations

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Apituley, A.; Beirle, S.; Dörner, S.; Friess, U.; Remmers, J.; Shaiganfar, R.

    2014-05-01

    Multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations of aerosols and trace gases can be strongly influenced by clouds. Thus, it is important to identify clouds and characterise their properties. In this study we investigate the effects of clouds on several quantities which can be derived from MAX-DOAS observations, like radiance, the colour index (radiance ratio at two selected wavelengths), the absorption of the oxygen dimer O4 and the fraction of inelastically scattered light (Ring effect). To identify clouds, these quantities can be either compared to their corresponding clear-sky reference values, or their dependencies on time or viewing direction can be analysed. From the investigation of the temporal variability the influence of clouds can be identified even for individual measurements. Based on our investigations we developed a cloud classification scheme, which can be applied in a flexible way to MAX-DOAS or zenith DOAS observations: in its simplest version, zenith observations of the colour index are used to identify the presence of clouds (or a high aerosol load). In more sophisticated versions, other quantities and viewing directions are also considered, which allows subclassifications like, e.g., thin or thick clouds, or fog. We applied our cloud classification scheme to MAX-DOAS observations during the Cabauw Intercomparison campaign of Nitrogen Dioxide measuring Instruments (CINDI) in the Netherlands in summer 2009 and found very good agreement with sky images taken from the ground and backscatter profiles from a lidar.
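
    In its simplest version the scheme reduces to a threshold test on the zenith colour index against a clear-sky reference, as in the minimal sketch below. The wavelength pair and the 0.8 factor are illustrative assumptions, not the calibrated values of the study.

    ```python
    # Minimal sketch: flag clouded (or high-aerosol) conditions when the
    # zenith colour index falls well below its clear-sky reference.
    import numpy as np

    def cloud_flag(radiance_330, radiance_390, clear_sky_ci):
        ci = radiance_330 / radiance_390            # colour index: radiance ratio
        return ci < 0.8 * np.asarray(clear_sky_ci)  # True = cloudy / high aerosol
    ```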

  12. Texture-Based Automated Lithological Classification Using Aeromagnetic Anomaly Images

    USGS Publications Warehouse

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.

  13. Agents.

    PubMed

    Chambers, David W

    2002-01-01

    Although health care is inherently an economic activity, it is inadequately described as a market process. An alternative, grounded in organizational economic theory, is to view professionals and many others as agents, contracted to advance the best interests of their principals (patients). This view untangles some of the ethical conflicts in dentistry. It also helps identify major controllable costs in dentistry and suggests that dentists can act as a group to increase or decrease agency costs, primarily by controlling the bad actors who damage the value of all dentists.

  14. Laser-induced Mg production from magnesium oxide using Si-based agents and Si-based agents recycling

    NASA Astrophysics Data System (ADS)

    Liao, S. H.; Yabe, T.; Mohamed, M. S.; Baasandash, C.; Sato, Y.; Fukushima, C.; Ichikawa, M.; Nakatsuka, M.; Uchida, S.; Ohkubo, T.

    2011-01-01

    We succeeded in laser-induced magnesium (Mg) production from magnesium oxide (MgO) using Si-based reducing agents, silicon (Si) and silicon monoxide (SiO). In these experiments, a cw CO2 laser irradiated a mixture of MgO and the Si-based agents. Both experimental studies and theoretical analysis help not only to understand the function of the reducing agents but also to optimize Mg extraction in laser-induced Mg production. Optimal energy efficiencies of 12.1 mg/kJ and 4.5 mg/kJ for Mg production were achieved using Si and SiO, respectively. In addition, the possibility of recycling Si and SiO was preliminarily investigated without reducing agents, using laser irradiation alone. As for the recycling of the Si-based agents, we succeeded in removing 36 mol % of the oxygen fraction from SiO2, obtaining a Si production efficiency of 0.7 mg/kJ together with 15.6 mg/kJ for SiO. In addition, laser irradiation of an MgO-SiO mixture produced 24 mg/kJ of Si with more than 99% purity.

  15. Web-based Agents for Reengineering Engineering Education.

    ERIC Educational Resources Information Center

    Cao, Lilian; Bengu, Golgen

    2000-01-01

    Describes four Web-based agents developed for reengineering a freshman chemistry laboratory education: the "intelligent tutoring tool" that conducts online problem-solving coaching; "the adaptive lecture guide" that provides navigation guidance sensitive to students' knowledge status; the "student modeler" that assesses students' knowledge…

  16. A Large Scale, High Resolution Agent-Based Insurgency Model

    DTIC Science & Technology

    2013-09-30

    HSCB Models can be employed for simulating mission scenarios, determining optimal strategies for disrupting terrorist networks, or training and...

  17. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
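
    The activation rule at the heart of the model, as it is commonly described, is that an agent rebels when grievance minus perceived risk exceeds a threshold. A minimal sketch follows; grid dynamics, movement, and jailing are omitted, and the parameter values are illustrative rather than taken from the paper.

    ```python
    # Minimal sketch of the civil-violence activation rule as commonly
    # described: grievance = hardship * (1 - legitimacy); perceived risk =
    # risk aversion * estimated arrest probability. Values are illustrative.
    import numpy as np

    def is_active(hardship, legitimacy, risk_aversion,
                  cops_ratio, k=2.3, threshold=0.1):
        grievance = hardship * (1.0 - legitimacy)
        arrest_prob = 1.0 - np.exp(-k * cops_ratio)   # cops-to-actives in vision
        net_risk = risk_aversion * arrest_prob
        return grievance - net_risk > threshold
    ```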

  18. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  19. Agent-based Approaches to Dynamic Team Simulation

    DTIC Science & Technology

    2008-09-01

    behavior. The second section reviews agent-based models of teamwork describing work involving both teamwork approaches to design of multiagent systems...there is less direct evidence for teams. Hough (1992), for example, found that ratings on conscientiousness, emotional stability, and agreeableness...Peeters, Rutte, Tuijl, and Reymen (2006) who found agreeableness and emotional stability positively related to satisfaction with the team make

  1. An Agent-based Framework for Web Query Answering.

    ERIC Educational Resources Information Center

    Wang, Huaiqing; Liao, Stephen; Liao, Lejian

    2000-01-01

    Discusses discrepancies between user queries on the Web and the answers provided by information sources; proposes an agent-based framework for Web mining tasks; introduces an object-oriented deductive data model and a flexible query language; and presents a cooperative mechanism for query answering. (Author/LRW)

  2. Agent-Based Models in Empirical Social Research

    ERIC Educational Resources Information Center

    Bruch, Elizabeth; Atwell, Jon

    2015-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first…

  3. Adding ecosystem function to agent-based land use models

    USDA-ARS?s Scientific Manuscript database

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeoche...

  4. Solution of partial differential equations by agent-based simulation

    NASA Astrophysics Data System (ADS)

    Szilagyi, Miklos N.

    2014-01-01

    The purpose of this short note is to demonstrate that partial differential equations can be quickly solved by agent-based simulation with high accuracy. There is no need for the solution of large systems of algebraic equations. This method is especially useful for quick determination of potential distributions and demonstration purposes in teaching electromagnetism.
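
    The idea can be made concrete with Laplace's equation: each grid cell acts as an agent that repeatedly replaces its value with the average of its four neighbours, which converges to the potential distribution for the given boundary values (Jacobi relaxation viewed agent-wise). A minimal sketch, assuming the array border is held fixed:

    ```python
    # Minimal sketch: each cell-agent averages its four neighbours, which
    # converges to a solution of Laplace's equation for fixed boundaries.
    import numpy as np

    def relax_laplace(phi, boundary_mask, n_steps=5000):
        """phi: 2-D potential grid; boundary_mask: True where values are fixed.
        The outer border of the array must be included in boundary_mask."""
        for _ in range(n_steps):
            avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            phi = np.where(boundary_mask, phi, avg)
        return phi
    ```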

  5. [The system of management nocturnal enuresis based on functional classification].

    PubMed

    Kroll, Paweł; Zachwieja, Jacek

    2006-01-01

    The aim of the study was to describe our diagnostic and therapeutic algorithm, based on a functional classification, in children with enuresis, and the effects of therapy based on this classification. We reviewed the charts of 123 children managed for nocturnal enuresis (68 boys, 55 girls, aged 4-18 (mean 7.6) years). Every child routinely underwent ultrasonography, urinalysis, and uroflowmetry with estimation of residual urine. Children with urinary tract infections or malformations of the urinary tract were not included in this study. At the first visit all children were instructed to keep a voiding diary. On the basis of data from the voiding diaries and uroflowmetries, the children were divided into two groups: Group I (n=21), with monosymptomatic nocturnal enuresis, and Group II (n=102), children with bladder dysfunction and enuresis. In the first group, a rehabilitation program with bladder training, keeping a voiding diary, and conditioning therapy with an alarm device was introduced. In children with bladder dysfunction, therapy started with bladder training and pharmacotherapy of the bladder dysfunction. Nine children (6 from Group I and 3 from Group II) started to wake after starting bladder training. 81 children from Group II improved bladder function. 30 children from Group II started to wake up during therapy of bladder dysfunction. In the 44 children who improved bladder function but still had episodes of nocturnal enuresis, therapy with an alarm device was introduced. Of all 66 children treated with the alarm device, 5 started to wake up without a single episode of wetting. In 20 children the ability to wake up before the alarm started to ring appeared in the first month of therapy. 40 children needed to be treated for a second month, and in 5 children therapy was prolonged to a third month. 9 children did not learn to wake up for urination. We had 8 drop-outs. In 7, therapy was repeated because of recurrence. The system of treatment of nocturnal enuresis is effective both in children with

  6. Magnetic-resonance-based system for chemical agent screening

    NASA Astrophysics Data System (ADS)

    Kumar, Sankaran; Magnuson, Erik E.; Newman, David E.; Prado, Pablo J.; Lawton, Jess

    2003-09-01

    Quantum Magnetics is developing a system based on magnetic resonance (MR), combined with a proprietary technology, to screen for chemical agents in nonmetallic containers without the need to open the container. It derives from the successful design and testing of a similar system for detecting liquid explosives. Preliminary measurements indicate that the system promises to screen quickly for many chemical agents and to offer an unambiguous hazard/safe result. The system will be designed to be portable and easy to operate, to need minimal human interpretation, and to be ideal for operation at checkpoints, government buildings, airports, and the like.

  7. An Agent Based Model for Social Class Emergence

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxiang; Rodriguez Segura, Daniel; Lin, Fei; Mazilu, Irina

    We present an open system agent-based model to analyze the effects of education and the society-specific wealth transactions on the emergence of social classes. Building on previous studies, we use realistic functions to model how years of education affect the income level. Numerical simulations show that the fraction of an individual's total transactions that is invested rather than consumed can cause wealth gaps between different income brackets in the long run. In an attempt to incorporate the network effects, we also explore how the probability of interactions among agents depending on the spread of their income brackets affects wealth distribution.

  8. Personalized E- learning System Based on Intelligent Agent

    NASA Astrophysics Data System (ADS)

    Duo, Sun; Ying, Zhou Cai

    Lack of personalization is the key shortcoming of traditional e-learning systems. This paper analyzes the personal characteristics in e-learning activity. To meet the need for personalized e-learning, a personalized e-learning system based on an intelligent agent is proposed and realized. The structure of the system, its work process, and the design and realization of the intelligent agent are introduced. After trial use of the system by a network school, we found that the system could improve learners' active participation and provide them with a personalized knowledge service. Thus, it may be a practical solution for realizing self-directed learning and self-improvement in the age of lifelong education.

  9. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABMs) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABMs differ greatly in model development, usage and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally, since it is closer to human cognition. Using multi-agent models to study complex systems has attracted criticism because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  10. Agent-based reasoning for distributed multi-INT analysis

    NASA Astrophysics Data System (ADS)

    Inchiosa, Mario E.; Parker, Miles T.; Perline, Richard

    2006-05-01

    Fully exploiting the intelligence community's exponentially growing data resources will require computational approaches differing radically from those currently available. Intelligence data is massive, distributed, and heterogeneous. Conventional approaches requiring highly structured and centralized data will not meet this challenge. We report on a new approach, Agent-Based Reasoning (ABR). In NIST evaluations, the use of ABR software tripled analysts' solution speed, doubled accuracy, and halved perceived difficulty. ABR makes use of populations of fine-grained, locally interacting agents that collectively reason about intelligence scenarios in a self-organizing, "bottom-up" process akin to those found in biological and other complex systems. Reproduction rules allow agents to make inferences from multi-INT data, while movement rules organize information and optimize reasoning. Complementary deterministic and stochastic agent behaviors enhance reasoning power and flexibility. Agent interaction via small-world networks - such as are found in nervous systems, social networks, and power distribution grids - dramatically increases the rate of discovering intelligence fragments that usefully connect to yield new inferences. Small-world networks also support the distributed processing necessary to address intelligence community data challenges. In addition, we have found that ABR pre-processing can boost the performance of commercial text clustering software. Finally, we have demonstrated interoperability with Knowledge Engineering systems and seen that reasoning across diverse data sources can be a rich source of inferences.
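
    The small-world effect the abstract appeals to is easy to demonstrate, assuming the networkx package: a few random rewirings of a ring lattice sharply reduce the average path length, which is what accelerates the spread of connectable intelligence fragments. A minimal sketch with illustrative sizes:

    ```python
    # Minimal sketch: rewiring a ring lattice into a small-world network
    # sharply reduces average path length (Watts-Strogatz construction).
    import networkx as nx

    ring = nx.watts_strogatz_graph(n=200, k=4, p=0.0)          # regular ring
    small_world = nx.connected_watts_strogatz_graph(200, 4, 0.1)
    print(nx.average_shortest_path_length(ring))         # long paths
    print(nx.average_shortest_path_length(small_world))  # much shorter paths
    ```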

  11. Efficient Agent-Based Models for Non-Genomic Evolution

    NASA Technical Reports Server (NTRS)

    Gupta, Nachi; Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Modeling dynamical systems composed of aggregations of primitive proteins is critical to the field of astrobiological science involving early evolutionary structures and the origins of life. Unfortunately, traditional non-multi-agent methods either require oversimplified models or are slow to converge to adequate solutions. This paper shows how to address these deficiencies by modeling the protein aggregations through a utility-based multi-agent system. In this method each agent controls the properties of a set of proteins assigned to that agent. Some of these properties determine the dynamics of the system, such as the ability of some proteins to join or split other proteins, while additional properties determine the aggregation's fitness as a viable primitive cell. We show that over a wide range of starting conditions, there are mechanisms that allow protein aggregations to achieve high values of overall fitness. In addition, through the use of agent-specific utilities that remain aligned with the overall global utility, we are able to reach these conclusions with 50 times fewer learning steps.

  12. Gd-HOPO Based High Relaxivity MRI Contrast Agents

    SciTech Connect

    Datta, Ankona; Raymond, Kenneth

    2008-11-06

    Tris-bidentate HOPO-based ligands developed in our laboratory were designed to complement the coordination preferences of Gd{sup 3+}, especially its oxophilicity. The HOPO ligands provide a hexadentate coordination environment for Gd{sup 3+} in which all the donor atoms are oxygen. Because Gd{sup 3+} favors eight or nine coordination, this design provides two to three open sites for inner-sphere water molecules. These water molecules rapidly exchange with bulk solution, hence affecting the relaxation rates of bulk water molecules. The parameters affecting the efficiency of these contrast agents have been tuned to improve contrast while still maintaining a high thermodynamic stability for Gd{sup 3+} binding. The Gd-HOPO-based contrast agents surpass current commercially available agents because of a higher number of inner-sphere water molecules, rapid exchange of inner-sphere water molecules via an associative mechanism, and a long electronic relaxation time. The contrast enhancement provided by these agents is at least twice that of commercial contrast agents, which are based on polyaminocarboxylate ligands.

  13. Peptide-based imaging agents for cancer detection

    PubMed Central

    Sun, Xiaolian; Li, Yesen; Liu, Ting; Li, Zijing; Zhang, Xianzhong; Chen, Xiaoyuan

    2017-01-01

    Selective receptor-targeting peptide based agents have attracted considerable attention in molecular imaging of tumor cells that overexpress corresponding peptide receptors due to their unique properties such as rapid clearance from circulation as well as high affinities and specificities for their targets. The rapid growth of chemistry modification techniques has enabled the design and development of various peptide-based imaging agents with enhanced metabolic stability, favorable pharmacokinetics, improved binding affinity and selectivity, better imaging ability as well as biosafety. Among them, many radiolabeled peptides have already been translated into the clinic with impressive diagnostic accuracy and sensitivity. This review summarizes the current status in the development of peptide-based imaging agents with an emphasis on the consideration of probe design including the identification of suitable peptides, the chemical modification of probes and the criteria for clinical translation. Specific examples in clinical trials have been provided as well with respect to their diagnostic capability compared with other FDA approved imaging agents. PMID:27327937

  14. Can agent based models effectively reduce fisheries management implementation uncertainty?

    NASA Astrophysics Data System (ADS)

    Drexler, M.

    2016-02-01

    Uncertainty is an inherent feature of fisheries management. Implementation uncertainty remains a challenge to quantify, often due to unintended responses of users to management interventions. This problem will continue to plague both single-species and ecosystem-based fisheries management advice unless the mechanisms driving these behaviors are properly understood. Equilibrium models, where each actor in the system is treated as uniform and predictable, are not well suited to forecast the unintended behaviors of individual fishers. Alternatively, agent-based models (ABMs) can simulate the behaviors of each individual actor driven by differing incentives and constraints. This study evaluated the feasibility of using ABMs to capture macro-scale behaviors of the US West Coast groundfish fleet. Agent behavior was specified at the vessel level. Agents made daily fishing decisions using knowledge of their own cost structure, catch history, and the histories of catch and quota markets. By adding only a relatively small number of incentives, the model was able to reproduce highly realistic macro patterns of expected outcomes in response to management policies (catch restrictions, MPAs, ITQs) while preserving vessel heterogeneity. These simulations indicate that agent-based modeling approaches hold much promise for simulating fisher behaviors and reducing implementation uncertainty. Additional processes affecting behavior, informed by surveys, are continually being added to the fisher behavior model. Further coupling of the fisher behavior model to a spatial ecosystem model will provide a fully integrated social, ecological, and economic model capable of performing management strategy evaluations that properly consider implementation uncertainty in fisheries management.

  15. Fines classification based on sensitivity to pore-fluid chemistry

    USGS Publications Warehouse

    Jang, Junbong; Santamarina, J. Carlos

    2016-01-01

    The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil “electrical sensitivity.” Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems.

  16. Classification of genes based on gene expression analysis

    SciTech Connect

    Angelova, M. Myers, C. Faith, J.

    2008-05-15

    Systems biology and bioinformatics are now major fields for productive research. DNA microarrays and other array technologies and genome sequencing have advanced to the point that it is now possible to monitor gene expression on a genomic scale. Gene expression analysis is discussed and some important clustering techniques are considered. The patterns identified in the data suggest similarities in the gene behavior, which provides useful information for the gene functionalities. We discuss measures for investigating the homogeneity of gene expression data in order to optimize the clustering process. We contribute to the knowledge of functional roles and regulation of E. coli genes by proposing a classification of these genes based on consistently correlated genes in expression data and similarities of gene expression patterns. A new visualization tool for targeted projection pursuit and dimensionality reduction of gene expression data is demonstrated.

  17. Multi-scale classification based lesion segmentation for dermoscopic images.

    PubMed

    Abedini, Mani; Codella, Noel; Chakravorty, Rajib; Garnavi, Rahil; Gutman, David; Helba, Brian; Smith, John R

    2016-08-01

    This paper presents a robust segmentation method based on multi-scale classification to identify the lesion boundary in dermoscopic images. Our proposed method leverages a collection of classifiers which are trained at various resolutions to categorize each pixel as "lesion" or "surrounding skin". In detection phase, trained classifiers are applied on new images. The classifier outputs are fused at pixel level to build probability maps which represent lesion saliency maps. In the next step, Otsu thresholding is applied to convert the saliency maps to binary masks, which determine the border of the lesions. We compared our proposed method with existing lesion segmentation methods proposed in the literature using two dermoscopy data sets (International Skin Imaging Collaboration and Pedro Hispano Hospital) which demonstrates the superiority of our method with Dice Coefficient of 0.91 and accuracy of 94%.
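
    A minimal sketch of the fusion and thresholding steps, assuming scikit-image for Otsu's method: per-pixel lesion probabilities from the per-scale classifiers are averaged into a saliency map, which is then binarized. The classifiers themselves are not shown.

    ```python
    # Minimal sketch: fuse per-scale probability maps and binarize with Otsu.
    import numpy as np
    from skimage.filters import threshold_otsu

    def fuse_and_threshold(prob_maps):
        """prob_maps: list of (H, W) lesion-probability maps, one per scale."""
        saliency = np.mean(prob_maps, axis=0)
        return saliency > threshold_otsu(saliency)   # True = lesion pixel
    ```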

  18. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
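
    The noise-selection step can be sketched compactly: one GMM per noise type is trained on MFCCs of speech-absent frames, and new frames are assigned to the type with the highest log-likelihood. The sketch below shows only this step under illustrative settings; the DNN enhancement stage is not reproduced.

    ```python
    # Minimal sketch: per-noise-type GMMs over MFCC frames; pick the type
    # whose model gives the highest average log-likelihood.
    from sklearn.mixture import GaussianMixture

    def train_noise_models(mfccs_by_type, n_components=8):
        """mfccs_by_type: dict mapping noise name -> (n_frames, n_mfcc) array."""
        return {name: GaussianMixture(n_components).fit(frames)
                for name, frames in mfccs_by_type.items()}

    def classify_noise(models, mfcc_frames):
        scores = {name: gmm.score(mfcc_frames) for name, gmm in models.items()}
        return max(scores, key=scores.get)
    ```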

  19. Classification of cassava genotypes based on qualitative and quantitative data.

    PubMed

    Oliveira, E J; Oliveira Filho, O S; Santos, V S

    2015-02-02

    We evaluated the genetic variation of cassava accessions based on qualitative (binomial and multicategorical) and quantitative (continuous) traits. We characterized 95 accessions obtained from the Cassava Germplasm Bank of Embrapa Mandioca e Fruticultura, evaluating them for 13 continuous, 10 binary, and 25 multicategorical traits. First, we analyzed the accessions based only on quantitative traits; next, we conducted joint analysis (qualitative and quantitative traits) based on the Ward-MLM method, which performs clustering in two stages. According to the pseudo-F, pseudo-t2, and maximum likelihood criteria, we identified five and four groups based on quantitative-trait and joint analysis, respectively. The smaller number of groups identified based on joint analysis may be related to the nature of the data. Quantitative data are more subject to environmental effects on phenotype expression, which can produce phenotypic differences even in the absence of genetic differences, thereby contributing to greater apparent differentiation among accessions. For most of the accessions, the maximum probability of classification was >0.90, independent of the trait analyzed, indicating a good fit of the clustering method. Differences in clustering according to the type of data implied that analysis of quantitative and qualitative traits in cassava germplasm might explore different genomic regions. On the other hand, when joint analysis was used, the means and ranges of genetic distances were high, indicating that the Ward-MLM method is very useful for clustering genotypes when there are several phenotypic traits, such as in the case of genetic resources and breeding programs.

  1. ICA-Based Imagined Conceptual Words Classification on EEG Signals.

    PubMed

    Imani, Ehsan; Pourmohammad, Ali; Bagheri, Mahsa; Mobasheri, Vida

    2017-01-01

    function, the classification accuracies were almost the same and not very different. Linear discriminant analysis (LDA) yielded higher classification accuracies than the neural network. ICA is a suitable algorithm for recognizing a word's concept and its locus in the brain. The results of this experiment were consistent with those obtained by other methods, such as functional magnetic resonance imaging and EEG-based methods, for vowel imagination and covert speech. Herein, the highest classification accuracy was obtained by extracting the target signal from the ICA output and extracting AR-model coefficient features over a 2.5-s time interval. Finally, LDA achieved the highest classification accuracy, at more than 60%.

  2. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling, especially as the size of the training sample grows.

  3. A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms.

    PubMed

    Şen, Baha; Peker, Musa; Çavuşoğlu, Abdullah; Çelebi, Fatih V

    2014-03-01

    Sleep scoring is one of the most important diagnostic methods in psychiatry and neurology, but sleep staging is a time-consuming and difficult task undertaken by sleep experts. This study aims to identify a method that classifies sleep stages automatically and with a high degree of accuracy, and in this manner assists sleep experts. The study consists of three stages: feature extraction from EEG signals, feature selection, and classification of these signals. In the feature extraction stage, 20 attribute algorithms in four categories are used, from which 41 feature parameters were obtained. Feature selection is important for eliminating irrelevant and redundant features; in this manner prediction accuracy is improved and the computational overhead of classification is reduced. Effective feature selection algorithms such as minimum redundancy maximum relevance (mRMR), fast correlation-based feature selection (FCBF), ReliefF, t-test, and Fisher score are preferred at the feature selection stage for selecting a set of features that best represent the EEG signals. The features obtained are used as input parameters for the classification algorithms. At the classification stage, five different classification algorithms (random forest (RF), feed-forward neural network (FFNN), decision tree (DT), support vector machine (SVM), and radial basis function neural network (RBF)) classify the problem. The results obtained from the different classification algorithms are provided so that a comparison can be made between computation times and accuracy rates. Finally, 97.03% classification accuracy is obtained using the proposed method. The results show that the proposed method can support the design of a new intelligent sleep-scoring assistance system.
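
    The select-then-classify pattern the study follows can be sketched with scikit-learn, using mutual-information ranking as a stand-in for mRMR/ReliefF and two of the five classifiers; the EEG feature extraction itself is not shown and all settings are illustrative.

    ```python
    # Minimal sketch: rank features, keep the top k, compare classifiers by
    # cross-validated accuracy. X: (n_epochs, 41) features; y: stage labels.
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    def compare_classifiers(X, y, k=15):
        for clf in (RandomForestClassifier(), SVC()):
            pipe = make_pipeline(SelectKBest(mutual_info_classif, k=k), clf)
            print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())
    ```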

  4. Validation of the determinant-based classification and revision of the Atlanta classification systems for acute pancreatitis.

    PubMed

    Acevedo-Piedra, Nelly G; Moya-Hoyo, Neftalí; Rey-Riveiro, Mónica; Gil, Santiago; Sempere, Laura; Martínez, Juan; Lluís, Félix; Sánchez-Payá, José; de-Madaria, Enrique

    2014-02-01

    Two new classification systems for the severity of acute pancreatitis (AP) have been proposed, the determinant-based classification (DBC) and a revision of the Atlanta classification (RAC). Our aim was to validate and compare these classification systems. We analyzed data from adult patients with AP (543 episodes of AP in 459 patients) who were admitted to Hospital General Universitario de Alicante from December 2007 to February 2013. Imaging results were reviewed, and the classification systems were validated and compared in terms of outcomes. Pancreatic necrosis was present in 66 of the patients (12%), peripancreatic necrosis in 109 (20%), walled-off necrosis in 61 (11%), acute peripancreatic fluid collections in 98 (18%), and pseudocysts in 19 (4%). Transient and persistent organ failures were present in 31 patients (6%) and 21 patients (4%), respectively. Sixteen patients (3%) died. On the basis of the DBC, 386 (71%), 131 (24%), 23 (4%), and 3 (0.6%) patients were determined to have mild, moderate, severe, or critical AP, respectively. On the basis of the RAC, 363 patients (67%), 160 patients (30%), and 20 patients (4%) were determined to have mild, moderately severe, or severe AP, respectively. The different categories of severity for each classification system were associated with statistically significant and clinically relevant differences in length of hospital stay, need for admission to the intensive care unit, nutritional support, invasive treatment, and in-hospital mortality. In comparing similar categories between the classification systems, no significant differences were found. The DBC and the RAC accurately classify the severity of AP in subgroups of patients.

  5. A New Classification Based on the Kaban's Modification for Surgical Management of Craniofacial Microsomia

    PubMed Central

    Madrid, Jose Rolando Prada; Montealegre, Giovanni; Gomez, Viviana

    2010-01-01

    In medicine, classifications are designed to describe accurately and reliably all anatomic and structural components, establish a prognosis, and guide a given treatment. Classifications should be useful in a universal way to facilitate communication between health professionals and to formulate management protocols. In many situations and particularly with craniofacial microsomia, there have been many different classifications that do not achieve this goal. In fact, when there are so many classifications, one can conclude that there is not a clear one that accomplishes all these ends and defines a treatment protocol. It is our intent to present a new classification based on the Pruzansky's classification, later modified by Kaban, to determine treatment protocols based on the degree of osseous deficiency present in the body, ramus, and temporomandibular joint. Different mandibular defects are presented in two patients with craniofacial microsomia type III and IV according to our classification with the corresponding management proposed for each type and adequate functional results. PMID:22110812

  6. Docking-based classification models for exploratory toxicology ...

    EPA Pesticide Factsheets

    Background: Exploratory toxicology is a new emerging research area whose ultimate mission is that of protecting human health and environment from risks posed by chemicals. In this regard, the ethical and practical limitation of animal testing has encouraged the promotion of computational methods for the fast screening of huge collections of chemicals available on the market. Results: We derived 24 reliable docking-based classification models able to predict the estrogenic potential of a large collection of chemicals having high quality experimental data, kindly provided by the U.S. Environmental Protection Agency (EPA). The predictive power of our docking-based models was supported by values of AUC, EF1% (EFmax = 7.1), -LR (at SE = 0.75) and +LR (at SE = 0.25) ranging from 0.63 to 0.72, from 2.5 to 6.2, from 0.35 to 0.67 and from 2.05 to 9.84, respectively. In addition, external predictions were successfully made on some representative known estrogenic chemicals. Conclusion: We show how structure-based methods, widely applied to drug discovery programs, can be adapted to meet the conditions of the regulatory context. Importantly, these methods enable one to employ the physicochemical information contained in the X-ray solved biological target and to screen structurally-unrelated chemicals.

  7. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins, presenting the classification as binary trees. In this paper, we propose a new protein classification method, whereby theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835

  8. Hyperspectral image classification based on NMF Features Selection Method

    NASA Astrophysics Data System (ADS)

    Abe, Bolanle T.; Jordaan, J. A.

    2013-12-01

    Hyperspectral instruments are capable of collecting hundreds of images corresponding to wavelength channels for the same area on the earth surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and suffer from the so-called curse of dimensionality. In addition, high correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is crucial for obtaining good classification accuracy. This paper presents a new feature selection technique: a Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain a reduced set of relevant features in the input domain of each class label, with the aim of reducing classification error and the dimensionality of the classification problem. The Indian Pines dataset from Northwest Indiana is used to evaluate the performance of the proposed method through feature selection and classification experiments. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is selected as the tool to implement the classification using Support Vector Machines and a Neural Network. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how feature set size affects classification accuracy. The results obtained show that the performances of the classifiers are significant. The study makes a positive contribution to the problems of hyperspectral imagery by exploring NMF, SVMs and NNs to improve classification accuracy. The performances of the classifiers are valuable for decision makers weighing tradeoffs between method accuracy and method complexity.
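
    A minimal sketch of the reduce-then-classify idea, using scikit-learn's generic NMF as a stand-in for the paper's per-class selection scheme: per-pixel spectra (non-negative) are factorized into a few components that feed an SVM.

    ```python
    # Minimal sketch: NMF dimensionality reduction followed by an SVM.
    # X holds per-pixel spectra of shape (n_pixels, n_bands), non-negative.
    from sklearn.decomposition import NMF
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    def nmf_svm(n_components=20):
        return make_pipeline(NMF(n_components=n_components, init="nndsvd",
                                 max_iter=500), SVC())

    # usage: nmf_svm().fit(X_train, y_train).score(X_test, y_test)
    ```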

  9. [Classification of human sleep stages based on EEG processing using hidden Markov models].

    PubMed

    Doroshenkov, L G; Konyshev, V A; Selishchev, S V

    2007-01-01

    The goal of this work was to describe an automated system for classification of human sleep stages. Classification of sleep stages is an important problem of diagnosis and treatment of human sleep disorders. The developed classification method is based on calculation of characteristics of the main sleep rhythms. It uses hidden Markov models. The method is highly accurate and provides reliable identification of the main stages of sleep. The results of automatic classification are in good agreement with the results of sleep stage identification performed by an expert somnologist using Rechtschaffen and Kales rules. This substantiates the applicability of the developed classification system to clinical diagnosis.
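
    A minimal sketch of HMM-based stage decoding, assuming the third-party hmmlearn package: epochs are described by rhythm-band features, hidden states play the role of sleep stages, and the Viterbi path labels the epochs. This is a generic illustration, not the authors' trained system.

    ```python
    # Minimal sketch: fit a Gaussian HMM over per-epoch EEG rhythm features
    # and decode the most likely stage sequence (Viterbi path).
    from hmmlearn.hmm import GaussianHMM

    def decode_stages(features, n_stages=5):
        """features: (n_epochs, n_features) per-epoch EEG rhythm descriptors."""
        model = GaussianHMM(n_components=n_stages, covariance_type="diag",
                            n_iter=50).fit(features)
        return model.predict(features)   # most likely stage per epoch
    ```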

  10. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  11. Locally linear embedding (LLE) for MRI based Alzheimer's disease classification.

    PubMed

    Liu, Xin; Tosun, Duygu; Weiner, Michael W; Schuff, Norbert

    2013-12-01

    Modern machine learning algorithms are increasingly being used in neuroimaging studies, such as the prediction of Alzheimer's disease (AD) from structural MRI. However, finding a good representation for multivariate brain MRI features in which their essential structure is revealed and easily extractable has been difficult. We report a successful application of a machine learning framework that significantly improved the use of brain MRI for predictions. Specifically, we used the unsupervised learning algorithm of local linear embedding (LLE) to transform multivariate MRI data of regional brain volume and cortical thickness to a locally linear space with fewer dimensions, while also utilizing the global nonlinear data structure. The embedded brain features were then used to train a classifier for predicting future conversion to AD based on a baseline MRI. We tested the approach on 413 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had baseline MRI scans and complete clinical follow-ups over 3 years with the following diagnoses: cognitive normal (CN; n=137), stable mild cognitive impairment (s-MCI; n=93), MCI converters to AD (c-MCI, n=97), and AD (n=86). We found that classifications using embedded MRI features generally outperformed (p<0.05) classifications using the original features directly. Moreover, the improvement from LLE was not limited to a particular classifier but worked equally well for regularized logistic regressions, support vector machines, and linear discriminant analysis. Most strikingly, using LLE significantly improved (p=0.007) predictions of MCI subjects who converted to AD and those who remained stable (accuracy/sensitivity/specificity: =0.68/0.80/0.56). In contrast, predictions using the original features performed not better than by chance (accuracy/sensitivity/specificity: =0.56/0.65/0.46). In conclusion, LLE is a very effective tool for classification studies of AD using multivariate MRI data. The improvement
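
    The embed-then-classify scheme is straightforward to sketch with scikit-learn: reduce the multivariate MRI features with LLE, then train a simple classifier on the embedded coordinates. Neighbourhood size, dimensionality, and the classifier choice below are illustrative, not the study's settings.

    ```python
    # Minimal sketch: LLE embedding of MRI features, then a linear classifier.
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.linear_model import LogisticRegression

    def lle_classifier(X_train, y_train, n_neighbors=20, n_components=10):
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                     n_components=n_components)
        Z = lle.fit_transform(X_train)           # embedded coordinates
        clf = LogisticRegression(max_iter=1000).fit(Z, y_train)
        return lle, clf   # apply lle.transform before clf.predict on new data
    ```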

  12. Classification of types of stuttering symptoms based on brain activity.

    PubMed

    Jiang, Jing; Lu, Chunming; Peng, Danling; Zhu, Chaozhe; Howell, Peter

    2012-01-01

    Among the non-fluencies seen in speech, some are more typical (MT) of stuttering speakers, whereas others are less typical (LT) and are common to both stuttering and fluent speakers. No neuroimaging work has evaluated the neural basis for grouping these symptom types. Another long-debated issue is which type (LT, MT) whole-word repetitions (WWR) should be placed in. In this study, a sentence completion task was performed by twenty stuttering patients who were scanned using an event-related design. This task elicited stuttering in these patients. Each stuttered trial from each patient was sorted into the MT or LT types with WWR put aside. Pattern classification was employed to train a patient-specific single trial model to automatically classify each trial as MT or LT using the corresponding fMRI data. This model was then validated by using test data that were independent of the training data. In a subsequent analysis, the classification model, just established, was used to determine which type the WWR should be placed in. The results showed that the LT and the MT could be separated with high accuracy based on their brain activity. The brain regions that made most contribution to the separation of the types were: the left inferior frontal cortex and bilateral precuneus, both of which showed higher activity in the MT than in the LT; and the left putamen and right cerebellum which showed the opposite activity pattern. The results also showed that the brain activity for WWR was more similar to that of the LT and fluent speech than to that of the MT. These findings provide a neurological basis for separating the MT and the LT types, and support the widely-used MT/LT symptom grouping scheme. In addition, WWR play a similar role as the LT, and thus should be placed in the LT type.

  13. An innovative blazar classification based on radio jet kinematics

    NASA Astrophysics Data System (ADS)

    Hervet, O.; Boisson, C.; Sol, H.

    2016-07-01

    Context. Blazars are usually classified following their synchrotron peak frequency (νF(ν) scale) as high, intermediate, low frequency peaked BL Lacs (HBLs, IBLs, LBLs), and flat spectrum radio quasars (FSRQs), or, according to their radio morphology at large scale, FR I or FR II. However, the diversity of blazars is such that these classes seem insufficient to chart the specific properties of each source. Aims: We propose to classify a wide sample of blazars following the kinematic features of their radio jets seen in very long baseline interferometry (VLBI). Methods: For this purpose we use public data from the MOJAVE collaboration in which we select a sample of blazars with known redshift and sufficient monitoring to constrain apparent velocities. We selected 161 blazars from a sample of 200 sources. We identify three distinct classes of VLBI jets depending on radio knot kinematics: class I with quasi-stationary knots, class II with knots in relativistic motion from the radio core, and class I/II, intermediate, showing quasi-stationary knots at the jet base and relativistic motions downstream. Results: A notable result is the good overlap of this kinematic classification with the usual spectral classification; class I corresponds to HBLs, class II to FSRQs, and class I/II to IBLs/LBLs. We deepen this study by characterizing the physical parameters of jets from VLBI radio data. Hence we focus on the singular case of the class I/II by the study of the blazar BL Lac itself. Finally we show how the interpretation that radio knots are recollimation shocks is fully appropriate to describe the characteristics of these three classes.

  15. Classification of Types of Stuttering Symptoms Based on Brain Activity

    PubMed Central

    Jiang, Jing; Lu, Chunming; Peng, Danling; Zhu, Chaozhe; Howell, Peter

    2012-01-01

    Among the non-fluencies seen in speech, some are more typical (MT) of stuttering speakers, whereas others are less typical (LT) and are common to both stuttering and fluent speakers. No neuroimaging work has evaluated the neural basis for grouping these symptom types. Another long-debated issue is which type (LT, MT) whole-word repetitions (WWR) should be placed in. In this study, a sentence completion task was performed by twenty stuttering patients who were scanned using an event-related design. This task elicited stuttering in these patients. Each stuttered trial from each patient was sorted into the MT or LT types, with WWR put aside. Pattern classification was employed to train a patient-specific single-trial model to automatically classify each trial as MT or LT using the corresponding fMRI data. This model was then validated using test data that were independent of the training data. In a subsequent analysis, the classification model just established was used to determine which type the WWR should be placed in. The results showed that the LT and the MT could be separated with high accuracy based on their brain activity. The brain regions that made the most contribution to the separation of the types were: the left inferior frontal cortex and bilateral precuneus, both of which showed higher activity in the MT than in the LT; and the left putamen and right cerebellum, which showed the opposite activity pattern. The results also showed that the brain activity for WWR was more similar to that of the LT and fluent speech than to that of the MT. These findings provide a neurological basis for separating the MT and the LT types, and support the widely-used MT/LT symptom grouping scheme. In addition, WWR play a similar role to the LT, and thus should be placed in the LT type. PMID:22761887
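
    A minimal sketch of the single-trial pattern-classification step is given below, assuming synthetic voxel patterns in place of real fMRI trials; the classifier choice (a linear SVM via scikit-learn) and all sizes are illustrative stand-ins rather than the authors' exact pipeline.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)

      # Synthetic stand-in for one patient's data: each row is a voxel
      # pattern for one stuttered trial, labeled MT (1) or LT (0).
      n_trials, n_voxels = 80, 500
      y = rng.integers(0, 2, n_trials)
      X = rng.normal(size=(n_trials, n_voxels)) + 0.5 * y[:, None]

      # Patient-specific linear model with within-pipeline standardization,
      # validated on folds independent of the training data.
      model = make_pipeline(StandardScaler(), LinearSVC(C=0.1, dual=False))
      scores = cross_val_score(model, X, y, cv=5)
      print("mean cross-validated accuracy: %.2f" % scores.mean())

      # The fitted model could then be applied to held-out WWR trials to
      # see whether their predicted labels fall mostly on the LT side.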

  16. A classification algorithm based on Cloude decomposition model for fully polarimetric SAR image

    NASA Astrophysics Data System (ADS)

    Xiang, Hongmao; Liu, Shanwei; Zhuang, Ziqi; Zhang, Naixin

    2016-11-01

    Remote sensing is an important technology for monitoring the coastal zone, but it is difficult to obtain usable optical data in cloudy or rainy weather. SAR is an important data source for coastal-zone monitoring because it is not restricted by weather and can operate in all conditions. Fully polarimetric SAR data are also more informative than single-polarization and multi-polarization SAR data. The experiment used a fully polarimetric Radarsat-2 SAR image covering the Yellow River Estuary. In view of the features of the study area, we carried out the H/α unsupervised classification, the H/α-Wishart unsupervised classification, and the H/α-Wishart unsupervised classification based on the results of Cloude decomposition. A new classification method is proposed that applies Wishart supervised classification to the result of the H/α-Wishart unsupervised classification. The experimental results showed that the new method effectively overcomes the shortcomings of unsupervised classification and significantly improves classification accuracy. It was also shown that the SAR classification had precision similar to that of a Landsat-7 image classified by the same method; the SAR image achieved better precision for water due to its sensitivity to water, while the Landsat-7 image achieved better precision for vegetation types.

  17. Sequence-based classification and identification of Fungi.

    PubMed

    Hibbett, David; Abarenkov, Kessy; Kõljalg, Urmas; Öpik, Maarja; Chai, Benli; Cole, James; Wang, Qiong; Crous, Pedro; Robert, Vincent; Helgason, Thorunn; Herr, Joshua R; Kirk, Paul; Lueschow, Shiloh; O'Donnell, Kerry; Nilsson, R Henrik; Oono, Ryoko; Schoch, Conrad; Smyth, Christopher; Walker, Donald M; Porras-Alfaro, Andrea; Taylor, John W; Geiser, David M

    Fungal taxonomy and ecology have been revolutionized by the application of molecular methods and both have increasing connections to genomics and functional biology. However, data streams from traditional specimen- and culture-based systematics are not yet fully integrated with those from metagenomic and metatranscriptomic studies, which limits understanding of the taxonomic diversity and metabolic properties of fungal communities. This article reviews current resources, needs, and opportunities for sequence-based classification and identification (SBCI) in fungi as well as related efforts in prokaryotes. To realize the full potential of fungal SBCI it will be necessary to make advances in multiple areas. Improvements in sequencing methods, including long-read and single-cell technologies, will empower fungal molecular ecologists to look beyond ITS and current shotgun metagenomics approaches. Data quality and accessibility will be enhanced by attention to data and metadata standards and rigorous enforcement of policies for deposition of data and workflows. Taxonomic communities will need to develop best practices for molecular characterization in their focal clades, while also contributing to globally useful datasets including ITS. Changes to nomenclatural rules are needed to enable valid publication of sequence-based taxon descriptions. Finally, cultural shifts are necessary to promote adoption of SBCI and to accord professional credit to individuals who contribute to community resources.

  18. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel-based "mouse pup syllable classification calculator".

    PubMed

    Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J

    2012-01-01

    Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
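
    The unsupervised clustering step can be illustrated with a generic k-means pass over per-syllable acoustic features. The sketch below uses synthetic feature values (duration, start/end frequency, and bandwidth are assumed stand-ins for the paper's spectro-temporal measures) and four clusters, matching the number reported above; it is not the authors' exact analysis.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Synthetic per-syllable features: duration (ms), start and end
      # frequency (kHz), bandwidth (kHz) -- stand-ins for measured values.
      features = np.column_stack([
          rng.uniform(5, 150, 400),    # duration
          rng.uniform(40, 110, 400),   # start frequency
          rng.uniform(40, 110, 400),   # end frequency
          rng.uniform(1, 40, 400),     # bandwidth
      ])

      X = StandardScaler().fit_transform(features)
      kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      print(np.bincount(kmeans.labels_))  # syllables per acoustic cluster

      # A new syllable is categorized by its nearest cluster centroid --
      # essentially what the Excel "syllable classification calculator" does.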

  19. Text Passage Retrieval Based on Colon Classification: Retrieval Performance.

    ERIC Educational Resources Information Center

    Shepherd, Michael A.

    1981-01-01

    Reports the results of experiments using colon classification for the analysis, representation, and retrieval of primary information from the full text of documents. Recall, precision, and search length measures indicate colon classification did not perform significantly better than Boolean or simple word occurrence systems. Thirteen references…

  20. Classification Based on Tree-Structured Allocation Rules

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2008-01-01

    The authors consider the problem of classifying an unknown observation into 1 of several populations by using tree-structured allocation rules. Although many parametric classification procedures are robust to certain assumption violations, there is need for classification procedures that can be used regardless of the group-conditional…

  1. Classification of LANDSAT agricultural data based upon color trends

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1977-01-01

    An automated classification procedure is described. The decision rules were developed for classifying an unknown observation by matching its color trend with that of expected trends for known crops. The results of this procedure were found to be encouraging when compared with the usual supervised classification procedures.

  3. Decomposition-based transfer distance metric learning for image classification.

    PubMed

    Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao

    2014-09-01

    Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. To this end, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
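
    The central DTDML idea, representing the target metric as a sparse combination of base metrics so that only the combination coefficients need to be learned, can be sketched as follows. This is an illustrative simplification (random positive semidefinite base metrics, toy pair labels, and a Lasso fit on pairwise-distance features), not the authors' actual optimization.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(2)
      d, n_bases, n_pairs = 10, 20, 200

      # Base metrics: random PSD matrices standing in for eigen-decomposed
      # source metrics (the paper also allows randomly generated bases).
      bases = []
      for _ in range(n_bases):
          A = rng.normal(size=(d, d))
          bases.append(A @ A.T / d)

      # Each labeled pair contributes one feature per base metric:
      # delta' B_k delta, where delta is the difference of the pair.
      diffs = rng.normal(size=(n_pairs, d))
      F = np.stack([np.einsum("nd,de,ne->n", diffs, B, diffs)
                    for B in bases], axis=1)

      # Toy side information: similar (0) vs dissimilar (1) pairs, driven
      # here by the first base metric so that the fit has signal.
      labels = (F[:, 0] > np.median(F[:, 0])).astype(float)

      # Sparse nonnegative coefficients; the learned target metric is
      # M = sum_k beta_k B_k, with far fewer variables than d*d.
      beta = Lasso(alpha=0.05, positive=True).fit(F, labels).coef_
      M = sum(b * B for b, B in zip(beta, bases))
      print("nonzero coefficients:", np.count_nonzero(beta))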

  4. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  5. Agent-Based Chemical Plume Tracing Using Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zarzhitsky, Dimitri; Spears, Diana; Thayer, David; Spears, William

    2004-01-01

    This paper presents a rigorous evaluation of a novel, distributed chemical plume tracing algorithm. The algorithm is a combination of the best aspects of the two most popular predecessors for this task. Furthermore, it is based on solid, formal principles from the field of fluid mechanics. The algorithm is applied by a network of mobile sensing agents (e.g., robots or micro-air vehicles) that sense the ambient fluid velocity and chemical concentration, and calculate derivatives. The algorithm drives the robotic network to the source of the toxic plume, where measures can be taken to disable the source emitter. This work is part of a much larger effort in research and development of a physics-based approach to developing networks of mobile sensing agents for monitoring, tracking, reporting and responding to hazardous conditions.

  6. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. Software artifact generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  7. Agent-based models in translational systems biology

    PubMed Central

    An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram

    2013-01-01

    Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989

  8. [Hard and soft classification method of multi-spectral remote sensing image based on adaptive thresholds].

    PubMed

    Hu, Tan-Gao; Xu, Jun-Feng; Zhang, Deng-Rong; Wang, Jie; Zhang, Yu-Zhou

    2013-04-01

    Hard and soft classification techniques are the conventional methods of image classification for satellite data, but each has its own advantages and drawbacks. In order to obtain accurate classification results, we took advantage of both traditional hard classification methods (HCM) and soft classification models (SCM), and developed a new method, called the hard and soft classification model (HSCM), based on adaptive threshold calculation. We tested the new method in land cover mapping applications. According to the confusion matrices, the overall accuracy of HCM, SCM, and HSCM is 71.06%, 67.86%, and 71.10%, respectively, and the kappa coefficient is 60.03%, 56.12%, and 60.07%, respectively. Therefore, the HSCM is better than both HCM and SCM. Experimental results proved that the new method can significantly improve land cover and land use classification accuracy.
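
    One plausible reading of the hard/soft combination is sketched below: trust the hard (argmax) label where the soft membership is confident, and keep the soft fractions elsewhere, with the confidence threshold adapted per class from the data. The per-class median rule is an assumption for illustration, not the paper's exact adaptive threshold calculation.

      import numpy as np

      rng = np.random.default_rng(3)
      n_pixels, n_classes = 1000, 4

      # Soft classifier output: per-pixel class membership fractions.
      memberships = rng.dirichlet(np.ones(n_classes), size=n_pixels)
      hard_labels = memberships.argmax(axis=1)          # HCM result

      # Adaptive per-class thresholds: here, the median of the winning
      # membership among pixels assigned to that class (an assumption).
      thresholds = np.array([
          np.median(memberships[hard_labels == c, c]) for c in range(n_classes)
      ])

      confident = memberships.max(axis=1) >= thresholds[hard_labels]
      print("pixels kept as hard labels: %.1f%%" % (100 * confident.mean()))

      # HSCM output: hard label where confident, soft fractions elsewhere
      # (mixed pixels keep their sub-pixel class proportions).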

  9. Human cancer classification: a systems biology- based model integrating morphology, cancer stem cells, proteomics, and genomics.

    PubMed

    Idikio, Halliday A

    2011-02-22

    Human cancer classification is currently based on the idea of cell of origin and on the light and electron microscopic attributes of the cancer. What is not yet integrated into cancer classification are the functional attributes of these cancer cells. Recent innovative techniques in biology have provided a wealth of information on the genomic, transcriptomic, and proteomic changes in cancer cells. The emerging concept of cancer stem cells needs to be included in a classification model to capture the known attributes of cancer stem cells and their potential contribution to treatment response and metastases. The integrated model of cancer classification presented here incorporates morphology, cancer stem cell contributions, and the genetic and functional attributes of cancer. Integrated cancer classification models could eliminate the unclassifiable cancers of current classifications. Future cancer treatment may be advanced by using an integrated model of cancer classification.

  10. Endogenizing geopolitical boundaries with agent-based modeling

    PubMed Central

    Cederman, Lars-Erik

    2002-01-01

    Agent-based modeling promises to overcome the reification of actors. Whereas this common, but limiting, assumption makes a lot of sense during periods characterized by stable actor boundaries, other historical junctures, such as the end of the Cold War, exhibit far-reaching and swift transformations of actors' spatial and organizational existence. Moreover, because actors cannot be assumed to remain constant in the long run, analysis of macrohistorical processes virtually always requires “sociational” endogenization. This paper presents a series of computational models, implemented with the software package REPAST, which trace complex macrohistorical transformations of actors, be they hierarchically organized as relational networks or as collections of symbolic categories. With respect to the former, dynamic networks featuring emergent compound actors with agent compartments represented in a spatial grid capture organizational domination of the territorial state. In addition, models of “tagged” social processes allow the analyst to show how democratic states predicate their behavior on categorical traits. Finally, categorical schemata that select out politically relevant cultural traits in ethnic landscapes formalize a constructivist notion of national identity in conformance with the qualitative literature on nationalism. This “finite-agent method”, representing both states and nations as higher-level structures superimposed on a lower-level grid of primitive agents or cultural traits, avoids reification of agency. Furthermore, it opens the door to explicit analysis of entity processes, such as the integration and disintegration of actors as well as boundary transformations. PMID:12011409

  11. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causal relationships and feedback loops among different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  12. Phosphoramidate-based Peptidomimetic Prostate Cancer PET Imaging Agents

    DTIC Science & Technology

    2013-07-01

    develop a PET imaging agent based on modifying the peptidomimetic PSMA inhibitor which will result in improved tumor uptake and clearance mechanism...Different fluorination approaches were attempted with PSMA module compounds such as direct labeling, copper-free chemistry and the use of...labeling approaches are established, and then the labeling of the modified PSMA inhibitor analogues will be investigated in vitro as well as in vivo.

  13. Phosphoramidate-based Peptidomimetic Prostate Cancer PET Imaging Agents

    DTIC Science & Technology

    2013-11-01

    goal is to develop a PET imaging agent based on modifying the peptidomimetic PSMA inhibitor which will result in improved tumor uptake and clearance...mechanism. Different fluorination approaches were attempted with PSMA module compounds such as direct labeling, copper-free chemistry and the use of...the labeling approaches are established, and then the labeling of the modified PSMA inhibitor analogues will be investigated in vitro as well as in

  14. Investigating biocomplexity through the agent-based paradigm

    PubMed Central

    Kaul, Himanshu

    2015-01-01

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment as well as the flexible and heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches that assume component homogeneity to relate system observables using mathematical equations. Although the homogeneity condition does not lead to loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems are a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines—or agents—to simulate, from the bottom-up, macroscopic properties of a system. In recognizing the heterogeneity condition, they offer suitable ontologies to the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms. The integration of any agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its nascence, agent-based modelling has been utilized to model biological complexity across a broad range of biological scales (from cells to societies). In this article, we explore the reasons that make agent-based modelling the most precise approach to model biological systems that tend to be non-linear and complex. PMID:24227161

  15. Thrombin-Based Hemostatic Agent in Primary Total Knee Arthroplasty.

    PubMed

    Fu, Xin; Tian, Peng; Xu, Gui-Jun; Sun, Xiao-Lei; Ma, Xin-Long

    2017-02-01

    The present meta-analysis pooled the results from randomized controlled trials (RCTs) to identify and assess the efficacy and safety of thrombin-based hemostatic agent in primary total knee arthroplasty (TKA). Potential academic articles were identified from the Cochrane Library, Medline (1966-2015.5), PubMed (1966-2015.5), Embase (1980-2015.5), and ScienceDirect (1966-2015.5). Relevant journals and the recommendations of expert panels were also searched by using the Google search engine. RCTs assessing the efficacy and safety of thrombin-based hemostatic agent in primary TKA were included. Pooling of data was analyzed by RevMan 5.1 (The Cochrane Collaboration, Oxford, UK). A total of four RCTs met the inclusion criteria. The meta-analysis revealed significant differences in postoperative hemoglobin decline (p < 0.00001), total blood loss (p < 0.00001), drainage volume (p = 0.01), and allogenic blood transfusion (p = 0.01) between the treatment group and the control group. No significant differences were found regarding incidence of infection (p = 0.45) and deep vein thrombosis (DVT; p = 0.80) between the groups. Meta-analysis indicated that the application of thrombin-based hemostatic agent before wound closure decreased postoperative hemoglobin decline, drainage volume, total blood loss, and transfusion rate and did not increase the risk of infection, DVT, or other complications. Therefore, the reviewers believe that thrombin-based hemostatic agent is effective and safe in primary TKA.

  16. A spectral-spatial kernel-based method for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Li, Li; Ge, Hongwei; Gao, Jianqiang

    2017-02-01

    Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature was extracted by using area median filtering (AMF). Secondly, the result of the AMF was used to construct spatial feature patches according to different window sizes. Finally, using the kernel technique, the spectral feature and the spatial feature were jointly used for classification through a support vector machine (SVM) formulation. The proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments were performed on three hyperspectral images. The experimental results show that an improvement is possible with the proposed technique in most of the real-world classification problems.
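
    A hedged sketch of the spectral-spatial composite-kernel idea follows, with a plain median filter standing in for the paper's area median filtering (AMF) and a weighted sum of RBF kernels fed to an SVM as a precomputed kernel; the synthetic cube, toy labels, and the weight mu and gamma values are all illustrative assumptions.

      import numpy as np
      from scipy.ndimage import median_filter
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)

      # Synthetic hyperspectral cube: 30 x 30 pixels, 50 bands.
      cube = rng.normal(size=(30, 30, 50))

      # Spatial feature per band via median filtering (stand-in for AMF
      # and the paper's patch construction).
      spatial = np.stack([median_filter(cube[:, :, b], size=5)
                          for b in range(cube.shape[2])], axis=2)

      X_spec = cube.reshape(-1, 50)
      X_spat = spatial.reshape(-1, 50)
      y = (X_spec[:, 0] > 0).astype(int)            # toy labels

      # Composite kernel: weighted sum of spectral and spatial RBF kernels
      # (a valid kernel, since PSD kernels are closed under this operation).
      mu = 0.6                                       # illustrative weight
      K = mu * rbf_kernel(X_spec, gamma=0.1) + \
          (1 - mu) * rbf_kernel(X_spat, gamma=0.1)

      clf = SVC(kernel="precomputed").fit(K, y)
      print("training accuracy:", clf.score(K, y))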

  17. Classification of Histological Images Based on the Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Nascimento, M. Z.; Neves, L.; Duarte, S. C.; Duarte, Y. A. S.; Ramos Batista, V.

    2015-01-01

    Non-Hodgkin lymphomas are of many distinct types, and different classification systems make it difficult to diagnose them correctly. Many of these systems classify lymphomas only based on what they look like under a microscope. In 2008 the World Health Organisation (WHO) introduced the most recent system, which also considers the chromosome features of the lymphoma cells and the presence of certain proteins on their surface. The WHO system is the one that we apply in this work. Herewith we present an automatic method to classify histological images of three types of non-Hodgkin lymphoma. Our method is based on the Stationary Wavelet Transform (SWT), and it consists of three steps: 1) extracting sub-bands from the histological image through SWT, 2) applying Analysis of Variance (ANOVA) to clean noise and select the most relevant information, 3) classifying it by the Support Vector Machine (SVM) algorithm. The kernel types Linear, RBF and Polynomial were evaluated with our method applied to 210 images of lymphoma from the National Institute on Aging. We concluded that the following combination led to the most relevant results: detail sub-band, ANOVA and SVM with Linear and RBF kernels.
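
    The three steps map naturally onto standard tools, as in the hedged sketch below: pywt for a one-level stationary wavelet transform, an ANOVA F-test for feature selection, and a linear-kernel SVM. The energy statistics and synthetic images are illustrative stand-ins for the paper's sub-band features and histological data.

      import numpy as np
      import pywt
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)

      def swt_detail_features(img):
          # Step 1: one-level SWT; keep the detail sub-bands (the
          # combination the study found most informative).
          (_, (cH, cV, cD)), = pywt.swt2(img, "haar", level=1)
          # Simple per-sub-band energy statistics as a compact descriptor.
          return np.array([np.mean(np.abs(c)) for c in (cH, cV, cD)] +
                          [np.std(c) for c in (cH, cV, cD)])

      # Synthetic 64x64 "histology" images: class 1 has higher texture
      # variance, so the detail energies carry the class signal.
      y = rng.integers(0, 2, 60)
      imgs = rng.normal(size=(60, 64, 64)) * (1 + y[:, None, None])
      X = np.array([swt_detail_features(im) for im in imgs])

      # Steps 2-3: ANOVA-based selection of the most relevant features,
      # then an SVM with a linear kernel.
      clf = make_pipeline(SelectKBest(f_classif, k=4), SVC(kernel="linear"))
      print("training accuracy:", clf.fit(X, y).score(X, y))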

  18. Toward a Safety Risk-Based Classification of Unmanned Aircraft

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2016-01-01

    There is a trend of growing interest and demand for greater access of unmanned aircraft (UA) to the National Airspace System (NAS) as the ongoing development of UA technology has created the potential for significant economic benefits. However, the lack of a comprehensive and efficient UA regulatory framework has constrained the number and kinds of UA operations that can be performed. This report presents initial results of a study aimed at defining a safety-risk-based UA classification as a plausible basis for a regulatory framework for UA operating in the NAS. Much of the study up to this point has been at a conceptual high level. The report includes a survey of contextual topics, analysis of safety risk considerations, and initial recommendations for a risk-based approach to safe UA operations in the NAS. The next phase of the study will develop and leverage deeper clarity and insight into practical engineering and regulatory considerations for ensuring that UA operations have an acceptable level of safety.

  19. Superpixel-based classification of gastric chromoendoscopy images

    NASA Astrophysics Data System (ADS)

    Boschetto, Davide; Grisan, Enrico

    2017-03-01

    Chromoendoscopy (CH) is a gastroenterology imaging modality that involves the staining of tissues with methylene blue, which reacts with the internal walls of the gastrointestinal tract, improving the visual contrast of mucosal surfaces and thus enhancing a doctor's ability to screen precancerous lesions or early cancer. This technique helps identify areas that can be targeted for biopsy or treatment, and in this work we focus on gastric cancer detection. Gastric chromoendoscopy for cancer detection has several taxonomies available, one of which classifies CH images into three classes (normal, metaplasia, dysplasia) based on color, shape, and regularity of pit patterns. Computer-assisted diagnosis is desirable to help improve the reliability of tissue classification and abnormality detection. However, traditional computer vision methodologies, mainly segmentation, do not translate well to the specific visual characteristics of a gastroenterology imaging scenario. We propose the exploitation of a first unsupervised segmentation via superpixels, which group pixels into perceptually meaningful atomic regions, used to replace the rigid structure of the pixel grid. For each superpixel, a set of features is extracted and then fed to a random forest based classifier, which computes a model used to predict the class of each superpixel. The average general accuracy of our model is 92.05% in the pixel domain (86.62% in the superpixel domain), while detection accuracies on the normal and abnormal classes are respectively 85.71% and 95%. Eventually, the class of the whole image can be predicted through a majority vote on the superpixels' predicted classes.
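
    A minimal sketch of the pipeline follows, assuming SLIC superpixels (a common choice; the paper does not necessarily use SLIC), simple per-superpixel color statistics in place of the full feature set, and a random forest with majority voting for the image-level class.

      import numpy as np
      from skimage.segmentation import slic
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(6)

      # Synthetic RGB "chromoendoscopy" frame with values in [0, 1].
      img = rng.random((120, 120, 3))

      # Unsupervised over-segmentation into perceptually coherent regions.
      segments = slic(img, n_segments=150, compactness=10.0)

      # One feature vector per superpixel: mean and std of each channel
      # (stand-ins for the paper's color/texture descriptors).
      ids = np.unique(segments)
      X = np.array([np.r_[img[segments == s].mean(axis=0),
                          img[segments == s].std(axis=0)] for s in ids])
      y = rng.integers(0, 3, len(ids))  # toy normal/metaplasia/dysplasia

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

      # Whole-image prediction by majority vote over superpixel predictions.
      pred = clf.predict(X)
      print("image-level class:", np.bincount(pred).argmax())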

  20. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading for prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is always time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, which is the most widespread method for histological grading of prostate tissues. We proposed two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR can be improved to 94.6%, 94.2%, and 94.6%, respectively, using each of the above three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods because it has a much smaller size and still retains the most powerful discriminating capability in grading prostate images.
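
    The box-counting estimate of fractal dimension that underlies such features can be sketched compactly; the implementation below is a generic version operating on a binary (thresholded) image, not the authors' specific intensity and texture variants.

      import numpy as np

      def box_counting_dimension(mask):
          """Estimate the fractal (box-counting) dimension of a binary image."""
          n = mask.shape[0]
          sizes = [s for s in (2, 4, 8, 16, 32) if s < n]
          counts = []
          for s in sizes:
              # Number of s-by-s boxes containing at least one foreground pixel.
              trimmed = mask[: n - n % s, : n - n % s]
              boxes = trimmed.reshape(n // s, s, -1, s).any(axis=(1, 3))
              counts.append(boxes.sum())
          # Slope of log(count) vs log(1/size) approximates the dimension.
          coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return coeffs[0]

      rng = np.random.default_rng(7)
      mask = rng.random((128, 128)) > 0.5   # toy thresholded tissue image
      print("estimated dimension: %.2f" % box_counting_dimension(mask))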

  1. Image classification based on scheme of principal node analysis

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Ma, Zheng; Xie, Mei

    2016-11-01

    This paper presents a scheme of principal node analysis (PNA) with the aim of improving the representativeness of the learned codebook so as to enhance the classification rate of scene images. In the preprocessing stage, images are converted to grayscale and scale-invariant feature transform (SIFT) descriptors are extracted from each image. Then, the PNA-based scheme is applied to the SIFT descriptors with iteration and selection algorithms. The principal nodes of each image are selected through spatial analysis of the SIFT descriptors with Manhattan distance (L1 norm) and Euclidean distance (L2 norm) in order to increase the representativeness of the codebook. To evaluate the performance of our scheme, the feature vector of each image is calculated by two baseline methods after the codebook is constructed. The L1-PNA- and L2-PNA-based baseline methods are tested and compared with different scales of codebooks over three public scene image databases. The experimental results show the effectiveness of the proposed PNA scheme, with a higher categorization rate.

  2. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    NASA Astrophysics Data System (ADS)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing, however, can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  3. The Groningen laryngomalacia classification system--based on systematic review and dynamic airway changes.

    PubMed

    van der Heijden, Martijn; Dikkers, Frederik G; Halmos, Gyorgy B

    2015-12-01

    Laryngomalacia is the most common cause of dyspnea and stridor in newborn infants. It is a dynamic change of the upper airway based on abnormally pliable supraglottic structures, which causes upper airway obstruction. In the past, different classification systems have been introduced, but to date no classification system has been widely accepted and applied. Our goal is to provide a simple and complete classification system based on a systematic literature search and our experience. Retrospective cohort study with literature review. All patients with laryngomalacia under the age of 5 at the time of diagnosis were included. Photo and video documentation was used to confirm the diagnosis and the characteristics of dynamic airway change. The outcome was compared with available classification systems in the literature. Eighty-five patients were included. In contrast to other classification systems, only three distinct dynamic changes were identified in our series. Two existing classification systems covered 100% of our findings, but there was unnecessary overlap between different types in most of the systems. Based on our findings, we propose a new classification system for laryngomalacia that is purely based on dynamic airway changes. The Groningen laryngomalacia classification is a new, simplified classification system with three types, based purely on dynamic laryngeal changes and tested in a tertiary referral center: Type 1, inward collapse of the arytenoid cartilages; Type 2, medial displacement of the aryepiglottic folds; and Type 3, posterocaudal displacement of the epiglottis against the posterior pharyngeal wall.

  4. Classification of Polarimetric SAR Image Based on the Subspace Method

    NASA Astrophysics Data System (ADS)

    Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.

    2013-07-01

    Land cover classification is one of the most significant applications in remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate through clouds and has all-weather capability, so land cover classification from SAR images is important in remote sensing. The subspace method is a novel approach for SAR data which reduces data dimensionality by incorporating feature extraction into the classification process. This paper applies the averaged learning subspace method (ALSM) to fully polarimetric SAR imagery for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition, and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. From experiments on a fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy; the ALSM classification method is therefore a feasible alternative for SAR image classification.

  5. Classification in psychiatry: from a symptom based to a cause based model?

    PubMed

    Pritchard, Dylan

    2015-09-01

    The assumption that the classification in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) will eventually incorporate aspects of causation uncovered by research in neuroscience is examined in view of the National Institute of Mental Health's Research Domain Criteria (RDoC) project. I argue that significant advantages of maintaining the classification system, focussed on grouped descriptions of symptoms, are often undervalued or not considered. In this paper I challenge the standard view that the transition from the purely symptom-based approach is an inevitable and desirable change.

  6. An agent-based microsimulation of critical infrastructure systems

    SciTech Connect

    BARTON,DIANNE C.; STAMBER,KEVIN L.

    2000-03-29

    US infrastructures provide essential services that support economic prosperity and quality of life. Today, the latest threat to these infrastructures is the increasing complexity and interconnectedness of the system. On balance, added connectivity will improve economic efficiency; however, increased coupling could also result in situations where a disturbance in an isolated infrastructure unexpectedly cascades across diverse infrastructures. An understanding of the behavior of complex systems can be critical to understanding and predicting infrastructure responses to unexpected perturbation. Sandia National Laboratories has developed an agent-based model of critical US infrastructures using time-dependent Monte Carlo methods and a genetic algorithm learning classifier system to control decision making. The model is currently under development and contains agents that represent several areas within the interconnected infrastructures, including electric power and fuel supply. Previous work shows that agent-based simulation models have the potential to improve the accuracy of complex system forecasting and to provide new insights into the factors that are the primary drivers of emergent behaviors in interdependent systems. Simulation results can be examined both computationally and analytically, offering new ways of theorizing about the impact of perturbations to an infrastructure network.

  7. Domination and evolution in agent based model of an economy

    NASA Astrophysics Data System (ADS)

    Kazmi, Syed S.

    We introduce an agent-based model of a pure exchange economy and of a simple economy that includes production, consumption, and distribution. Markets are described by Edgeworth exchange in both models; trades are binary bilateral trades at prices that are set in each trade. We found that prices converge over time to a value that is not the standard equilibrium value given by the Walrasian tâtonnement fiction. The average price and the distribution of wealth depend on the degree of domination (persuasive power) we introduced, based on differentials in trading "leverage" due to wealth differences. The full economy model is allowed to evolve by replacing agents that do not survive with agents having random properties. We found that, depending upon the average productivity compared to the average consumption, very different kinds of behavior emerged. The economy as a whole reaches a steady state through the population adapting to the conditions of productivity and consumption. Correlations develop in the population between what would be, for each individual, a random assignment of productivity, labor power, wealth, and preferences. The population adapts to the economic environment through the development of these correlations and without any learning process. We see signs of emerging social structure as a result of the necessity of survival.

  8. Amino acid–based surfactants: New antimicrobial agents.

    PubMed

    Pinazo, A; Manresa, M A; Marques, A M; Bustelo, M; Espuny, M J; Pérez, L

    2016-02-01

    The rapid increase of drug-resistant bacteria makes the development of new antimicrobial agents necessary. Synthetic amino acid-based surfactants constitute a promising alternative to conventional antimicrobial compounds given that they can be prepared from renewable raw materials. In this review, we discuss the structural features that promote the antimicrobial activity of amino acid-based surfactants. Monocatenary, dicatenary, and gemini surfactants that contain different amino acids on the polar head and show activity against bacteria are reviewed. The synthesis and basic physico-chemical properties have also been included.

  9. Agent-based modeling: case study in cleavage furrow models.

    PubMed

    Mogilner, Alex; Manhart, Angelika

    2016-11-07

    The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as "differential equation based" (DE) or "agent based" (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem-positioning of the cleavage furrow in dividing cells-to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches.

  10. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process has three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction, producing feature vectors, and ANMBP is used for classification. Feature reduction was tested using 100 energy values per feature and 10 energy values per feature. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.

  11. Fast L1-based sparse representation of EEG for motor imagery signal classification.

    PubMed

    Younghak Shin; Heung-No Lee; Balasingham, Ilangko

    2016-08-01

    Improvement of classification performance is one of the key challenges in electroencephalogram (EEG)-based motor imagery brain-computer interfaces (BCI). Recently, the sparse representation based classification (SRC) method has been shown to provide satisfactory accuracy in motor imagery classification. In this paper, we aim to evaluate the performance of the SRC method in terms of not only its classification accuracy but also its computation time. For this purpose, we investigate the performance of recently developed fast L1 minimization methods for use in SRC, such as homotopy and the fast iterative soft-thresholding algorithm (FISTA). From experimental analysis, we note that the SRC method with fast L1 minimization algorithms provides robust classification performance compared to the support vector machine (SVM), in both time and accuracy.
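
    A self-contained sketch of SRC with FISTA is given below: FISTA solves the L1-regularized least-squares (lasso) problem, and the class is chosen by the smallest class-wise reconstruction residual. The synthetic two-class "EEG feature" dictionary and all parameters are illustrative assumptions, not the paper's data or settings.

      import numpy as np

      def fista(A, b, lam=0.1, n_iter=200):
          """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          z, t = x.copy(), 1.0
          for _ in range(n_iter):
              g = z - A.T @ (A @ z - b) / L                       # gradient step
              x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + ((t - 1) / t_new) * (x_new - x)         # momentum
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(8)
      n_per_class, dim = 30, 40

      # Dictionary: training feature vectors stacked per class (synthetic).
      classes = [rng.normal(loc=m, size=(n_per_class, dim)) for m in (0.0, 1.0)]
      A = np.vstack(classes).T
      A /= np.linalg.norm(A, axis=0)         # unit-norm dictionary columns

      # SRC rule: sparse-code the test sample, keep each class's coefficients,
      # and assign the class with the smallest reconstruction residual.
      test = rng.normal(loc=1.0, size=dim)
      x = fista(A, test)
      residuals = []
      for c in range(2):
          mask = np.zeros_like(x)
          mask[c * n_per_class:(c + 1) * n_per_class] = 1.0
          residuals.append(np.linalg.norm(test - A @ (x * mask)))
      print("predicted class:", int(np.argmin(residuals)))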

  12. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    The accuracy of RS classification based on SVM, which is developed from statistical learning theory, is high even with a small number of training samples, which makes SVM-based classification of remote sensing images attractive. The traditional RS classification method combines visual interpretation with computer classification; the SVM-based approach improves accuracy considerably while saving much of the labor and time needed to interpret images and collect training samples. Kernel functions play an important part in the SVM algorithm. The method presented here uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
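
    A compound kernel can be built as a convex combination of standard kernels and passed to an SVM as a precomputed Gram matrix; the sketch below combines an RBF and a polynomial kernel with an illustrative weight (the paper's specific improved compound kernel is not reproduced here).

      import numpy as np
      from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(9)

      # Toy multispectral pixels (rows) with band values (columns) and labels.
      X = rng.normal(size=(300, 6))
      y = (X[:, :3].sum(axis=1) > 0).astype(int)

      def compound_kernel(A, B, w=0.5, gamma=0.5, degree=2):
          # Convex combination of a local (RBF) and a global (polynomial)
          # kernel; valid because PSD kernels are closed under addition
          # and nonnegative scaling.
          return w * rbf_kernel(A, B, gamma=gamma) + \
                 (1 - w) * polynomial_kernel(A, B, degree=degree)

      K_train = compound_kernel(X, X)
      clf = SVC(kernel="precomputed").fit(K_train, y)
      print("training accuracy:", clf.score(K_train, y))

      # New pixels are classified with the rectangular kernel against the
      # training set: K_test = compound_kernel(X_new, X).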

  13. Study of RS data classification based on rough sets and C4.5 algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Ming; Ai, Ting-hua

    2009-10-01

    Classification of remote sensing (RS) data is a primary information source for GIS in land-resource applications, and automatic, accurate mapping of regional LUCC from high spatial resolution satellite images is still a challenge. This paper discusses RS image classification techniques based on the C4.5 algorithm, on rough sets, and on their combination. On the basis of the theories and methods of spatial data mining, we improve the classification accuracy and validate the effectiveness of the approach on a test area. We took the outskirts of Fuzhou, an area of complicated land use in Fujian Province, as the study area. Classification rules integrating spectral, textural, and topographic characteristics are discovered from the samples through the C4.5 decision tree algorithm, rough sets, and both together, and the classification test is performed based on these rules. The traditional maximum likelihood classification is also compared to check the classification accuracy. The results show that the accuracy of knowledge-based classification is markedly higher than that of the traditional maximum likelihood classification, and the method combining rough sets with the C4.5 decision tree algorithm performs best.
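
    scikit-learn has no C4.5 implementation, but an entropy-criterion decision tree is a close stand-in for sketching how classification rules are induced from spectral, textural, and topographic attributes; the features and labels below are synthetic, and the feature names are hypothetical.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(10)

      # Toy per-pixel samples: spectral bands, a texture measure, and
      # elevation (the spectral/textural/topographic characters mentioned).
      X = np.column_stack([
          rng.normal(size=(500, 4)),        # four spectral bands
          rng.random(500),                  # GLCM-style texture value
          rng.uniform(0, 300, 500),         # elevation (m)
      ])
      y = rng.integers(0, 3, 500)           # toy land-use classes

      # An entropy-criterion tree is the closest scikit-learn analogue to
      # C4.5 (which uses gain ratio); rules can be read off the fitted tree.
      tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
      names = ["b1", "b2", "b3", "b4", "texture", "elevation"]
      print(export_text(tree, feature_names=names))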

  14. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude, and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance in starting drug use, for example rare events in personal experience that occasionally allow the barrier to drug use to be overcome. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents a first step toward a realistic description of this phenomenon and can easily be generalized in various directions.

  15. Recent Advances on Inorganic Nanoparticle-Based Cancer Therapeutic Agents

    PubMed Central

    Wang, Fenglin; Li, Chengyao; Cheng, Jing; Yuan, Zhiqin

    2016-01-01

    Inorganic nanoparticles have been widely investigated as therapeutic agents for cancer treatments in biomedical fields due to their unique physical/chemical properties, versatile synthetic strategies, easy surface functionalization and excellent biocompatibility. This review focuses on the discussion of several types of inorganic nanoparticle-based cancer therapeutic agents, including gold nanoparticles, magnetic nanoparticles, upconversion nanoparticles and mesoporous silica nanoparticles. Several cancer therapy techniques are briefly introduced at the beginning. Emphasis is placed on how these inorganic nanoparticles can provide enhanced therapeutic efficacy in cancer treatment through site-specific accumulation, targeted drug delivery and stimulated drug release, with elaborations on several examples to highlight the respective strategies adopted. Finally, a brief summary and future challenges are included. PMID:27898016

  16. Agent-Based Modeling of Noncommunicable Diseases: A Systematic Review

    PubMed Central

    Arah, Onyebuchi A.

    2015-01-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Sciences published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application. PMID:25602871

  17. Tissue-based standoff biosensors for detecting chemical warfare agents

    DOEpatents

    Greenbaum, Elias; Sanders, Charlene A.

    2003-11-18

    A tissue-based, deployable, standoff air quality sensor for detecting the presence of at least one chemical or biological warfare agent, includes: a cell containing entrapped photosynthetic tissue, the cell adapted for analyzing photosynthetic activity of the entrapped photosynthetic tissue; means for introducing an air sample into the cell and contacting the air sample with the entrapped photosynthetic tissue; a fluorometer in operable relationship with the cell for measuring photosynthetic activity of the entrapped photosynthetic tissue; and transmitting means for transmitting analytical data generated by the fluorometer relating to the presence of at least one chemical or biological warfare agent in the air sample, the sensor adapted for deployment into a selected area.

  18. Agent-based model to rural urban migration analysis

    NASA Astrophysics Data System (ADS)

    Silveira, Jaylson J.; Espíndola, Aquino L.; Penna, T. J. P.

    2006-05-01

    In this paper, we analyze the rural-urban migration phenomenon as it is usually observed in economies which are in the early stages of industrialization. The analysis is conducted by means of a statistical mechanics approach which builds a computational agent-based model. Agents are placed on a lattice and the connections among them are described via an Ising-like model. Simulations on this computational model show some emergent properties that are common in developing economies, such as transitional dynamics characterized by continuous growth of the urban population, followed by the equalization of expected wages between rural and urban sectors (the Harris-Todaro equilibrium condition), urban concentration, and increasing per capita income.
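
    A minimal sketch of such an Ising-like migration model is given below, assuming Glauber-style single-agent updates in which the switching probability depends on a stylized urban-rural wage gap plus neighbor imitation; the wage equation and all parameters are illustrative, not the authors' specification.

      import numpy as np

      rng = np.random.default_rng(11)
      L, steps = 50, 20000
      J, beta = 0.5, 2.0              # neighbor influence, decision sharpness

      # Sector of each agent on the lattice: +1 urban, -1 rural.
      s = np.where(rng.random((L, L)) < 0.2, 1, -1)

      for _ in range(steps):
          urban_share = (s == 1).mean()
          # Stylized Harris-Todaro ingredient: the expected urban wage falls
          # as migration swells the urban sector; rural wage is the numeraire.
          wage_gap = 1.5 * (1 - urban_share) - 1.0
          i, j = rng.integers(0, L, 2)
          neighbors = (s[(i + 1) % L, j] + s[i - 1, j] +
                       s[i, (j + 1) % L] + s[i, j - 1])
          field = wage_gap + J * neighbors          # economic pull + imitation
          p_urban = 1.0 / (1.0 + np.exp(-2 * beta * field))  # Glauber rule
          s[i, j] = 1 if rng.random() < p_urban else -1

      print("final urban share: %.2f" % (s == 1).mean())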

  19. Agent-based modeling of noncommunicable diseases: a systematic review.

    PubMed

    Nianogo, Roch A; Arah, Onyebuchi A

    2015-03-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Sciences published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application.

  20. Hypercompetitive Environments: An Agent-based model approach

    NASA Astrophysics Data System (ADS)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective, understood as resulting from repeated and local interaction of economic agents, without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and for the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this attempt.

  1. Phylogeny of certain biocontrol agents with special reference to nematophagous fungi based on RAPD.

    PubMed

    Jarullah, B M S; Subramanian, R B; Jummanah, M S J

    2005-01-01

    A number of phylogenetic studies have been carried out on biocontrol agents having similar biological control activity. However, no work has been carried out to determine the phylogenetic relationship amongst various groups of biological control agents with varied biocontrol properties. Our aim was to derive a phylogenetic relationship between diverse biocontrol agents belonging to the deuteromycetes and determine its correlation with their spore morphology and their biocontrol activity. RAPD was used to assess genomic variability in fungi used as biological control agents, which included ten isolates of nematophagous fungi such as Arthrobotrys sp., Duddingtonia sp., Paecilomyces sp. and Verticillium sp., along with two isolates of fungal biocontrol agents such as Trichoderma sp. and two isolates of entomopathogenic fungi including Beauveria sp. A plant pathogenic fungus, Verticillium alboatrum, was also included to increase the diversity of Deuteromycetes used. A similarity matrix was created using Jaccard's similarity coefficient and clustering was done using the unweighted pair group method with arithmetic mean (UPGMA). The final dendrogram was created using a combination of two programs, Freetree and TreeExplorer. The phylogenetic tree constructed from the RAPD data showed marked genetic variability among different strains of the same species. The spore morphologies of all these fungi were also studied. The phylogenetic pattern could be correlated with the conidial and conidiophore morphology, a criterion commonly used for the classification of fungi in general and Deuteromycetes in particular. Interestingly, the inferred phylogeny showed no significant grouping based on either their biological control properties or the trapping structures amongst the nematophagous fungi, as reported earlier by other workers. The phylogenetic pattern was also similar to the tree obtained by comparing the 18S rRNA sequences from the database. The result clearly indicates that the classical

  2. Palm-Vein Classification Based on Principal Orientation Features

    PubMed Central

    Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao

    2014-01-01

    Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of their uniqueness, stability, suitability for live-body identification, flexibility, and difficulty to forge. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on that matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining good recognition accuracy, the two bins neighbouring the corresponding bin are also searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA and our database by the proposed method for palm-vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
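    A minimal sketch of the bin-based retrieval idea described above: the principal direction is quantized into one of six bins and matching is restricted to that bin and its two neighbours. The direction extraction (Gaussian-Radon transform) is abstracted away, and match_score is a hypothetical similarity function.

```python
# Hedged sketch of bin-restricted palm-vein retrieval; the feature
# extraction is abstracted away and match_score is hypothetical.
NUM_BINS = 6

def direction_bin(principal_direction_deg: float) -> int:
    """Map a principal direction in [0, 180) degrees to one of six bins."""
    return int(principal_direction_deg // (180.0 / NUM_BINS)) % NUM_BINS

def candidate_bins(bin_id: int) -> list[int]:
    """The corresponding bin plus its two neighbouring bins (wrapping)."""
    return [(bin_id - 1) % NUM_BINS, bin_id, (bin_id + 1) % NUM_BINS]

def identify(test_direction_deg, database, match_score):
    """database: dict bin_id -> list of (label, template) pairs.
    match_score: assumed similarity function, higher is better."""
    best_label, best_score = None, float("-inf")
    for b in candidate_bins(direction_bin(test_direction_deg)):
        for label, template in database.get(b, []):
            s = match_score(test_direction_deg, template)
            if s > best_score:
                best_label, best_score = label, s
    return best_label
```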

  3. Classification of fusiform neocortical interneurons based on unsupervised clustering

    PubMed Central

    Cauli, Bruno; Porter, James T.; Tsuzuki, Keisuke; Lambolez, Bertrand; Rossier, Jean; Quenet, Brigitte; Audinat, Etienne

    2000-01-01

    A classification of fusiform neocortical interneurons (n = 60) was performed with an unsupervised cluster analysis based on the comparison of multiple electrophysiological and molecular parameters studied by patch-clamp and single-cell multiplex reverse transcription–PCR in rat neocortical acute slices. The multiplex reverse transcription–PCR protocol was designed to detect simultaneously the expression of GAD65, GAD67, calbindin, parvalbumin, calretinin, neuropeptide Y, vasoactive intestinal peptide (VIP), somatostatin (SS), cholecystokinin, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid, kainate, N-methyl-d-aspartate, and metabotropic glutamate receptor subtypes. Three groups of fusiform interneurons with distinctive features were disclosed by the cluster analysis. The first type of fusiform neuron (n = 12), termed regular spiking nonpyramidal (RSNP)-SS cluster, was characterized by a firing pattern of RSNP cells and by a high occurrence of SS. The second type of fusiform neuron (n = 32), termed RSNP-VIP cluster, predominantly expressed VIP and also showed firing properties of RSNP neurons with accommodation profiles different from those of RSNP-SS cells. Finally, the last type of fusiform neuron (n = 16) contained a majority of irregular spiking-VIPergic neurons. In addition, the analysis of glutamate receptors revealed cell-type-specific expression profiles. This study shows that combinations of multiple independent criteria define distinct neocortical populations of interneurons potentially involved in specific functions. PMID:10823957

  4. China's classification-based forest management: procedures, problems, and prospects.

    PubMed

    Dai, Limin; Zhao, Fuqiang; Shao, Guofan; Zhou, Li; Tang, Lina

    2009-06-01

    China's new Classification-Based Forest Management (CFM) is a two-class system, including Commodity Forest (CoF) and Ecological Welfare Forest (EWF) lands, so named according to differences in their distinct functions and services. The purposes of CFM are to improve forestry economic systems, strengthen resource management in a market economy, ease the conflicts between wood demands and public welfare, and meet the diversified needs for forest services in China. The formative process of China's CFM has involved a series of trials and revisions. China's central government accelerated the reform of CFM in the year 2000 and completed the final version in 2003. CFM was implemented at the provincial level with the aid of subsidies from the central government. About a quarter of the forestland in China was approved as National EWF lands by the State Forestry Administration in 2006 and 2007. Logging is prohibited on National EWF lands, and their landowners or managers receive subsidies of about 70 RMB (US$10) per hectare from the central government. CFM represents a new forestry strategy in China and its implementation inevitably faces challenges in promoting the understanding of forest ecological services, generalizing nationwide criteria for identifying EWF and CoF lands, setting up forest-specific compensation mechanisms for ecological benefits, enhancing the knowledge of administrators and the general public about CFM, and sustaining EWF lands under China's current forestland tenure system. CFM does, however, offer a viable pathway toward sustainable forest management in China.

  5. A tentative classification of paleoweathering formations based on geomorphological criteria

    NASA Astrophysics Data System (ADS)

    Battiau-Queney, Yvonne

    1996-05-01

    A geomorphological classification is proposed that emphasizes the usefulness of paleoweathering records in any reconstruction of past landscapes. Four main paleoweathering records are recognized: 1. Paleoweathering formations buried beneath a sedimentary or volcanic cover. Most of them are saprolites, sometimes with preserved overlying soils. Ages range from Archean to late Cenozoic times; 2. Paleoweathering formations trapped in karst: some of them have buried pre-existent karst landforms, others have developed simultaneously with the subjacent karst; 3. Relict paleoweathering formations: although inherited, they belong to the present landscape. Some of them are indurated (duricrusts, silcretes, ferricretes,…); others are not and owe their preservation to a stable morphotectonic environment; 4. Polyphased weathering mantles: weathering has taken place in changing geochemical conditions. After examples of each type are provided, the paper considers the relations between chemical weathering and landform development. The climatic significance of paleoweathering formations is discussed. Some remote morphogenic systems have no present equivalent. It is doubtful that chemical weathering alone might lead to widespread planation surfaces. Moreover, classical theories based on sea-level and rivers as the main factors of erosion are not really adequate to explain the observed landscapes.

  6. China's Classification-Based Forest Management: Procedures, Problems, and Prospects

    NASA Astrophysics Data System (ADS)

    Dai, Limin; Zhao, Fuqiang; Shao, Guofan; Zhou, Li; Tang, Lina

    2009-06-01

    China’s new Classification-Based Forest Management (CFM) is a two-class system, including Commodity Forest (CoF) and Ecological Welfare Forest (EWF) lands, so named according to differences in their distinct functions and services. The purposes of CFM are to improve forestry economic systems, strengthen resource management in a market economy, ease the conflicts between wood demands and public welfare, and meet the diversified needs for forest services in China. The formative process of China’s CFM has involved a series of trials and revisions. China’s central government accelerated the reform of CFM in the year 2000 and completed the final version in 2003. CFM was implemented at the provincial level with the aid of subsidies from the central government. About a quarter of the forestland in China was approved as National EWF lands by the State Forestry Administration in 2006 and 2007. Logging is prohibited on National EWF lands, and their landowners or managers receive subsidies of about 70 RMB (US$10) per hectare from the central government. CFM represents a new forestry strategy in China and its implementation inevitably faces challenges in promoting the understanding of forest ecological services, generalizing nationwide criteria for identifying EWF and CoF lands, setting up forest-specific compensation mechanisms for ecological benefits, enhancing the knowledge of administrators and the general public about CFM, and sustaining EWF lands under China’s current forestland tenure system. CFM does, however, offer a viable pathway toward sustainable forest management in China.

  7. Event-Based User Classification in Weibo Media

    PubMed Central

    Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as the real-time microblogging services, has attracted massive attention and support from social network users. The Weibo platform offers an opportunity for people to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to some event, and users who post different contents, or exhibit different behavior or attitudes, may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby further manage their contents, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate the Weibo properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately. PMID:25133235

  8. Automatic Mapping of Martian Landforms Using Segmentation-based Classification

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Stepinski, T. F.; Vilalta, R.

    2007-03-01

    We use terrain segmentation and classification techniques to automatically map landforms on Mars. The method is applied to six sites to obtain geomorphic maps geared toward rapid characterization of impact craters.

  9. Classification of CT brain images based on deep learning networks.

    PubMed

    Gao, Xiaohong W; Hui, Rui; Tian, Zengmin

    2017-01-01

    While computerised tomography (CT) may have been the first imaging tool to study the human brain, it has not yet been implemented into the clinical decision-making process for the diagnosis of Alzheimer's disease (AD). On the other hand, being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact of applying the burgeoning deep learning techniques to the task of classifying CT brain images, in particular utilising a convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, CT images (N = 285) are clustered into three groups: AD, lesion (e.g. tumour) and normal ageing. In addition, considering the large slice thickness of this collection along the depth (z) direction (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two networks is coordinated based on the average of the Softmax scores obtained from both, consolidating 2D images along the spatial axial direction and 3D segmented blocks, respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. Additionally, this improved CNN network appears to outperform the 2D-only version of the CNN network as well as a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates of 86.3%, 85.6 ± 1.10%, 86.3 ± 1.04%, 85.2 ± 1.60% and 83.1 ± 0.35% for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information
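    A minimal sketch of the score-level fusion step described above, assuming made-up Softmax outputs: the 2D network's per-slice scores and the 3D network's per-block scores are each averaged and then combined by a simple mean.

```python
# Hedged sketch of averaging Softmax scores from a 2D and a 3D network.
# The probabilities below are illustrative, not model outputs.
import numpy as np

CLASSES = ["AD", "lesion", "normal"]

def fuse_softmax(p2d_slices: np.ndarray, p3d_blocks: np.ndarray) -> str:
    """p2d_slices: (n_slices, 3) Softmax outputs of the 2D CNN;
    p3d_blocks: (n_blocks, 3) Softmax outputs of the 3D CNN."""
    p2d = p2d_slices.mean(axis=0)   # consolidate axial slices
    p3d = p3d_blocks.mean(axis=0)   # consolidate 3D segmented blocks
    fused = (p2d + p3d) / 2.0       # average of the two networks
    return CLASSES[int(np.argmax(fused))]

# Toy usage with made-up scores:
print(fuse_softmax(np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]),
                   np.array([[0.5, 0.1, 0.4]])))
```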

  10. Basic Hand Gestures Classification Based on Surface Electromyography

    PubMed Central

    Palkowski, Aleksander; Redlarski, Grzegorz

    2016-01-01

    This paper presents an innovative classification system for hand gestures using two-channel surface electromyography analysis. The system uses a Support Vector Machine classifier, whose kernel function and parameters are additionally optimised by the Cuckoo Search swarm algorithm. The system is compared with standard Support Vector Machine classifiers with various kernel functions. An average classification rate of 98.12% was achieved for the proposed method. PMID:27298630
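    A minimal sketch of the classification stage under stated assumptions: synthetic sEMG features and a plain grid search standing in for the Cuckoo Search optimiser used in the paper.

```python
# Hedged sketch: SVM over sEMG features with kernel/parameter search.
# A grid search stands in for the paper's Cuckoo Search optimiser;
# the features and gesture labels are synthetic.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # e.g. RMS/MAV features per channel
y = rng.integers(0, 4, size=200)   # four hypothetical gesture labels

search = GridSearchCV(
    SVC(),
    {"kernel": ["rbf", "poly"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```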

  11. Renoprotection and the Bardoxolone Methyl Story – Is This the Right Way Forward? A Novel View of Renoprotection in CKD Trials: A New Classification Scheme for Renoprotective Agents

    PubMed Central

    Onuigbo, Macaulay

    2013-01-01

    In the June 2011 issue of the New England Journal of Medicine, the BEAM (Bardoxolone Methyl Treatment: Renal Function in CKD/Type 2 Diabetes) trial investigators rekindled new interest and also some controversy regarding the concept of renoprotection and the role of renoprotective agents, when they reported significant increases in the mean estimated glomerular filtration rate (eGFR) in diabetic chronic kidney disease (CKD) patients with an eGFR of 20-45 ml/min/1.73 m2 of body surface area at enrollment who received the trial drug bardoxolone methyl versus placebo. Unfortunately, subsequent phase IIIb trials failed to show that the drug is a safe alternative renoprotective agent. Current renoprotection paradigms depend wholly and entirely on angiotensin blockade; however, these agents [angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs)] have proved to be imperfect renoprotective agents. In this review, we examine the mechanistic limitations of the various previous randomized controlled trials on CKD renoprotection, including the paucity of veritable, elaborate and systematic assessment methods for the documentation and reporting of individual patient-level, drug-related adverse events. We review the evidence base for the presence of putative, multiple independent and unrelated pathogenetic mechanisms that drive (diabetic and non-diabetic) CKD progression. Furthermore, we examine the validity, or lack thereof, of the hyped notion that the blockade of a single molecule (angiotensin II), which can only antagonize the angiotensin cascade, would veritably successfully, consistently and unfailingly deliver adequate and qualitative renoprotection results in (diabetic and non-diabetic) CKD patients. We clearly posit that there is this overarching impetus to arrive at the inference that multiple, disparately diverse and independent pathways, including any veritable combination of the mechanisms that we examine in this review, and many

  12. Remote sensing image classification based on support vector machine with the multi-scale segmentation

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Feng, Wei; Ma, Ruishi

    2015-12-01

    In this paper, we propose a new classification method based on a support vector machine (SVM) combined with multi-scale segmentation. The proposed method obtains satisfactory segmentation results based on both the spectral characteristics and the shape parameters of segments, and the SVM is then used to label all the regions after multi-scale segmentation, which effectively improves the classification results. Firstly, the homogeneity of the object spectra, texture and shape is calculated from the input image. Secondly, the multi-scale segmentation method is applied to the remote sensing image: combining graph-theory-based optimization with the multi-scale image segmentation, the resulting segments are merged according to the heterogeneity criteria. Finally, based on the segmentation result, the SVM model combining spectral and texture classification is constructed and applied. The results show that the proposed method can effectively improve remote sensing image classification accuracy and efficiency.
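    A minimal sketch of segment-wise SVM labelling, with SLIC superpixels standing in for the graph-based multi-scale segmentation, and synthetic features and labels; it shows only the shape of the pipeline, not the paper's heterogeneity-driven merging.

```python
# Hedged sketch: segment an image, build one feature vector per segment,
# label segments with an SVM, and map labels back to pixels.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                 # stand-in RS image
segments = slic(image, n_segments=50, compactness=10.0)

# One feature vector per segment: mean spectrum (texture and shape
# descriptors would be appended in the method described above).
seg_ids = np.unique(segments)
feats = np.array([image[segments == s].mean(axis=0) for s in seg_ids])

# For illustration the same segments serve as training and test data.
train_labels = rng.integers(0, 3, size=len(seg_ids))   # hypothetical labels
clf = SVC(kernel="rbf").fit(feats, train_labels)

# Assign the predicted class of each segment back to its pixels.
pred = clf.predict(feats)
class_map = pred[np.searchsorted(seg_ids, segments)]
print(class_map.shape)
```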

  13. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, that of cricket is very challenging and yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on the audio energy and zero crossing rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) with color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to other sports. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
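    A minimal sketch of the first-level audio features, short-time energy and zero crossing rate, computed over fixed-length frames; the frame length and the test signal are illustrative, not the paper's settings.

```python
# Hedged sketch: frame-wise short-time energy and zero crossing rate.
import numpy as np

def short_time_features(signal: np.ndarray, frame_len: int = 512):
    """Split the signal into non-overlapping frames and return the
    per-frame energy and zero crossing rate."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).sum(axis=1) / frame_len
    return energy, zcr

# Toy usage: a 440 Hz sine standing in for short-time audio.
t = np.linspace(0, 1, 16000)
energy, zcr = short_time_features(np.sin(2 * np.pi * 440 * t))
print(energy[:3], zcr[:3])
```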

  14. Intelligence system based classification approach for medical disease diagnosis

    NASA Astrophysics Data System (ADS)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends on the physician's intuition, experience and skill in comparing current indicators with previous ones, rather than on knowledge-rich data hidden in a database, which makes diagnosis a crucial and challenging task. The goal is to predict the patient's condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework describes the methodology for designing and evaluating the classification performance of two discrete ANFIS systems, whose hybrid learning combines least-squares estimation with either a modified Levenberg-Marquardt or a gradient descent algorithm, and which physicians can use to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets from the Mammographic Mass and Haberman's Survival datasets obtained from the benchmark University of California at Irvine (UCI) machine learning repository. The robustness of the performance, measured by total accuracy, sensitivity and specificity, is examined. In comparison, the proposed method achieves superior performance relative to a conventional ANFIS based on the gradient descent algorithm and to some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor at 2.80 GHz and 2.0 GB of RAM.

  15. Perspective: A Dynamics-Based Classification of Ventricular Arrhythmias

    PubMed Central

    Weiss, James N.; Garfinkel, Alan; Karagueuzian, Hrayr S.; Nguyen, Thao P.; Olcese, Riccardo; Chen, Peng-Sheng; Qu, Zhilin

    2015-01-01

    Despite key advances in the clinical management of life-threatening ventricular arrhythmias, culminating with the development of implantable cardioverter-defibrillators and catheter ablation techniques, pharmacologic/biologic therapeutics have lagged behind. The fundamental issue is that biological targets are molecular factors. Diseases, however, represent emergent properties at the scale of the organism that result from dynamic interactions between multiple constantly changing molecular factors. For a pharmacologic/biologic therapy to be effective, it must target the dynamic processes that underlie the disease. Here we propose a classification of ventricular arrhythmias that is based on our current understanding of the dynamics occurring at the subcellular, cellular, tissue and organism scales, which cause arrhythmias by simultaneously generating arrhythmia triggers and exacerbating tissue vulnerability. The goal is to create a framework that systematically links these key dynamic factors together with fixed factors (structural and electrophysiological heterogeneity) synergistically promoting electrical dispersion and increased arrhythmia risk to molecular factors that can serve as biological targets. We classify ventricular arrhythmias into three primary dynamic categories related generally to unstable Ca cycling, reduced repolarization, and excess repolarization, respectively. The clinical syndromes, arrhythmia mechanisms, dynamic factors and what is known about their molecular counterparts are discussed. Based on this framework, we propose a computational-experimental strategy for exploring the links between molecular factors, fixed factors and dynamic factors that underlie life-threatening ventricular arrhythmias. The ultimate objective is to facilitate drug development by creating an in silico platform to evaluate and predict comprehensively how molecular interventions affect not only a single targeted arrhythmia, but all primary arrhythmia dynamics

  16. Perspective: a dynamics-based classification of ventricular arrhythmias.

    PubMed

    Weiss, James N; Garfinkel, Alan; Karagueuzian, Hrayr S; Nguyen, Thao P; Olcese, Riccardo; Chen, Peng-Sheng; Qu, Zhilin

    2015-05-01

    Despite key advances in the clinical management of life-threatening ventricular arrhythmias, culminating with the development of implantable cardioverter-defibrillators and catheter ablation techniques, pharmacologic/biologic therapeutics have lagged behind. The fundamental issue is that biological targets are molecular factors. Diseases, however, represent emergent properties at the scale of the organism that result from dynamic interactions between multiple constantly changing molecular factors. For a pharmacologic/biologic therapy to be effective, it must target the dynamic processes that underlie the disease. Here we propose a classification of ventricular arrhythmias that is based on our current understanding of the dynamics occurring at the subcellular, cellular, tissue and organism scales, which cause arrhythmias by simultaneously generating arrhythmia triggers and exacerbating tissue vulnerability. The goal is to create a framework that systematically links these key dynamic factors together with fixed factors (structural and electrophysiological heterogeneity) synergistically promoting electrical dispersion and increased arrhythmia risk to molecular factors that can serve as biological targets. We classify ventricular arrhythmias into three primary dynamic categories related generally to unstable Ca cycling, reduced repolarization, and excess repolarization, respectively. The clinical syndromes, arrhythmia mechanisms, dynamic factors and what is known about their molecular counterparts are discussed. Based on this framework, we propose a computational-experimental strategy for exploring the links between molecular factors, fixed factors and dynamic factors that underlie life-threatening ventricular arrhythmias. The ultimate objective is to facilitate drug development by creating an in silico platform to evaluate and predict comprehensively how molecular interventions affect not only a single targeted arrhythmia, but all primary arrhythmia dynamics

  17. Protection of autonomous microgrids using agent-based distributed communication

    SciTech Connect

    Cintuglu, Mehmet H.; Ma, Tan; Mohammed, Osama A.

    2016-04-06

    This study presents a real-time implementation of autonomous microgrid protection using agent-based distributed communication. Protection of an autonomous microgrid requires special considerations compared to large-scale distribution networks due to the presence of power converters and relatively low inertia. In this work, we introduce a practical overcurrent and a frequency selectivity method to overcome conventional limitations. The proposed overcurrent scheme defines a selectivity mechanism considering the remedial action scheme (RAS) of the microgrid after a fault instant, based on feeder characteristics and the location of the intelligent electronic devices (IEDs). A synchrophasor-based online frequency selectivity approach is proposed to avoid pulse-loading effects in low-inertia microgrids. Experimental results are presented for verification of the proposed schemes using a laboratory-based microgrid. The setup was composed of actual generation units and IEDs using the IEC 61850 protocol. The experimental results were in excellent agreement with the proposed protection scheme.

  18. Improving Agent Based Models and Validation through Data Fusion

    PubMed Central

    Laskowski, Marek; Demianyk, Bryan C.P.; Friesen, Marcia R.; McLeod, Robert D.; Mukhi, Shamir N.

    2011-01-01

    This work is contextualized in research in modeling and simulation of infection spread within a community or population, with the objective of providing a public health and policy tool for assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work integrates real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. The novelty lies in the data sources, which are not necessarily obvious within ABM infection spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census/demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G Smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level. PMID:23569606
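    A minimal sketch of the individual-level SIR dynamics that such data sources feed: each agent carries a state, and random contacts (a crude stand-in for the cellular/Bluetooth contact data) transmit infection with some probability.

```python
# Hedged sketch of one agent-based SIR step; contact structure and
# parameters are illustrative, not the model's calibrated inputs.
import random

S, I, R = "S", "I", "R"

def step(states, contacts_per_agent=5, p_transmit=0.05, p_recover=0.1):
    """One simulated day: infectious agents may recover and may infect
    randomly chosen contacts that are still susceptible."""
    n = len(states)
    new = list(states)
    for i, s in enumerate(states):
        if s == I:
            if random.random() < p_recover:
                new[i] = R
            for _ in range(contacts_per_agent):
                j = random.randrange(n)
                if states[j] == S and random.random() < p_transmit:
                    new[j] = I
    return new

states = [I] * 5 + [S] * 995        # 5 seed infections in 1,000 agents
for day in range(30):
    states = step(states)
print(states.count(S), states.count(I), states.count(R))
```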

  19. Agent-based modelling of consumer energy choices

    NASA Astrophysics Data System (ADS)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers -- such as individual households -- using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.

  20. Speech/Music Classification Enhancement for 3GPP2 SMV Codec Based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Kyun; Chang, Joon-Hyuk

    In this letter, we propose a novel approach to speech/music classification based on the support vector machine (SVM) to improve the performance of the 3GPP2 selectable mode vocoder (SMV) codec. We first analyze the features and the classification method used in the real-time speech/music classification algorithm of the SMV, and then apply the SVM for enhanced speech/music classification. To evaluate performance, we compare the proposed algorithm with the traditional algorithm of the SMV. The proposed system is evaluated under various conditions and shows better performance than the original method in the SMV.

  1. [Automatic classification method of star spectrum data based on constrained concept lattice].

    PubMed

    Zhang, Ji-Fu; Ma, Yang

    2010-02-01

    Concept lattice is an effective formal tool for data analysis and knowledge extraction. The constrained concept lattice, with the characteristics of higher construction efficiency, practicability and pertinency, is a new concept lattice structure. For the automatic classification task of stellar spectra, a classification rule mining method based on the constrained concept lattice is presented by using the concepts of partition and extent supports. The experimental results, taking the stellar spectrum data as the formal context, validate the higher classification efficiency and correctness of the method, providing an effective way for the automatic classification of massive stellar spectral data.

  2. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work lies in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.

  3. Classification of PolSAR image based on quotient space theory

    NASA Astrophysics Data System (ADS)

    An, Zhihui; Yu, Jie; Liu, Xiaomeng; Liu, Limin; Jiao, Shuai; Zhu, Teng; Wang, Shaohua

    2015-12-01

    In order to improve classification accuracy, quotient space theory was applied to the classification of polarimetric SAR (PolSAR) images. Firstly, the Yamaguchi decomposition method is adopted to obtain the polarimetric characteristics of the image. At the same time, the Gray Level Co-occurrence Matrix (GLCM) and the Gabor wavelet are used to extract texture features. Secondly, combining the texture features and polarimetric characteristics, a Support Vector Machine (SVM) classifier is used for initial classification to establish different granularity spaces. Finally, according to the quotient space granularity synthesis theory, we merge and reason over the different quotient spaces to obtain the comprehensive classification result. The method proposed in this paper is tested with L-band AIRSAR data of San Francisco Bay. The result shows that the comprehensive classification result based on quotient space theory is superior to the classification result of a single granularity space.

  4. Agent-Based Computing in Distributed Adversarial Planning

    DTIC Science & Technology

    2010-08-09

  5. Land cover classification using random forest with genetic algorithm-based parameter optimization

    NASA Astrophysics Data System (ADS)

    Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian

    2016-07-01

    Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely, the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an unavoidable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize these two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained using the GA-RF method, the traditional RF method (with default parameters), and the support vector machine method. Compared with these two methods, GA-RF improved classification accuracy by 1.02% and 6.64%, respectively. The comparison results show that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
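    A minimal sketch of the tuning idea under stated assumptions: a tiny genetic algorithm (truncation selection plus mutation, not the paper's GA settings) searches over the two RF parameters named above, with synthetic data and cross-validated accuracy as fitness.

```python
# Hedged sketch: GA over (n_estimators, max_features) for a random
# forest. Data, population size, and mutation ranges are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
rng = np.random.default_rng(0)

def fitness(n_trees, max_feats):
    """Cross-validated accuracy of an RF with the candidate parameters."""
    clf = RandomForestClassifier(n_estimators=n_trees,
                                 max_features=max_feats, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Genome = (number of trees, variables tried at each split).
pop = [(int(rng.integers(10, 200)), int(rng.integers(1, 12)))
       for _ in range(8)]
for generation in range(5):
    scored = sorted(pop, key=lambda g: fitness(*g), reverse=True)
    parents = scored[:4]                            # truncation selection
    pop = parents + [(max(10, p[0] + int(rng.integers(-20, 21))),
                      int(np.clip(p[1] + rng.integers(-2, 3), 1, 12)))
                     for p in parents]              # mutated offspring

best = max(pop, key=lambda g: fitness(*g))
print("best (n_estimators, max_features):", best)
```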

  6. A Classification Method Based on Principal Components of SELDI Spectra to Diagnose Lung Adenocarcinoma

    PubMed Central

    Xiong, Li-Wen; Wang, Yi; Geng, Jun-Feng; Feng, Jiu-Xian; Han, Bao-Hui; Bao, Guo-Liang; Yang, Yu; Wang, Xiaotian; Jin, Li; Guo, Wensheng; Wang, Jiu-Cun

    2012-01-01

    Purpose Lung cancer is the leading cause of cancer death worldwide, but techniques for effective early diagnosis are still lacking. Proteomics technology has been applied extensively to the study of the proteins involved in carcinogenesis. In this paper, a classification method was developed based on principal components of surface-enhanced laser desorption/ionization (SELDI) spectral data. This method was applied to SELDI spectral data from 71 lung adenocarcinoma patients and 24 healthy individuals. Unlike other peak-selection-based methods, this method takes each spectrum as a unity. The aim of this paper was to demonstrate that this unity-based classification method is more robust and powerful as a method of diagnosis than peak-selection-based methods. Results The results showed that this classification method, which is based on principal components, has outstanding performance with respect to distinguishing lung adenocarcinoma patients from normal individuals. Through leave-one-out, 19-fold, 5-fold and 2-fold cross-validation studies, we found that this classification method based on principal components completely outperforms peak-selection-based methods, such as decision tree, classification and regression tree, support vector machine, and linear discriminant analysis. Conclusions and Clinical Relevance The classification method based on principal components of SELDI spectral data is a robust and powerful means of diagnosing lung adenocarcinoma. We assert that the high efficiency of this classification method renders it feasible for large-scale clinical use. PMID:22461913
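    A minimal sketch of the whole-spectrum idea, assuming synthetic spectra with the study's 71/24 class split: each spectrum is projected onto its principal components and classified in that space rather than via selected peaks.

```python
# Hedged sketch: PCA over whole spectra followed by a linear classifier,
# scored by cross-validation. Spectra here are synthetic stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(95, 2000))     # 95 samples x m/z intensities
labels = np.array([1] * 71 + [0] * 24)    # 71 cases, 24 controls (as above)

# Each spectrum is treated as a unity: project onto principal
# components, then discriminate in the reduced space.
model = make_pipeline(PCA(n_components=10),
                      LinearDiscriminantAnalysis())
print(cross_val_score(model, spectra, labels, cv=5).mean())
```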

  7. Ontology-based, multi-agent support of production management

    NASA Astrophysics Data System (ADS)

    Meridou, Despina T.; Inden, Udo; Rückemann, Claus-Peter; Patrikakis, Charalampos Z.; Kaklamani, Dimitra-Theodora I.; Venieris, Iakovos S.

    2016-06-01

    Over recent years, reported incidents of failed aircraft ramp-ups or delayed small-lot production have increased substantially. In this paper, we present a production management platform that combines agent-based techniques with the Service Oriented Architecture paradigm. This platform takes advantage of the functionality offered by the semantic web language OWL, which allows the users and services of the platform to speak a common language and, at the same time, facilitates risk management and decision making.

  8. Agent-based model of macrophage action on endocrine pancreas.

    PubMed

    Martínez, Ignacio V; Gómez, Enrique J; Hernando, M Elena; Villares, Ricardo; Mellado, Mario

    2012-01-01

    This paper proposes an agent-based model of the action of macrophages on the beta cells of the endocrine pancreas. The aim of this model is to simulate the processes of beta cell proliferation and apoptosis and also the process of phagocytosis of cell debris by macrophages, all of which are related to the onset of the autoimmune response in type 1 diabetes. We have used data from the scientific literature to design the model. The results show that the model obtains good approximations to real processes and could be used to shed light on some open questions concerning such processes.

  9. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over Dafeng City, Jiangsu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) providing elevation information; finally, the image feature knowledge, spectral indices and elevation information are used together to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, especially for building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes multi-source spatial data knowledge integration and application

  10. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images, and the parameters for the backlight dimming level and pixel compensation adapt to the image class. The simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed, with no perceived distortions when playing videos; the practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.

  11. Observations Regarding a Revised Standard Occupational Classification System Using a Skills Based Concept.

    ERIC Educational Resources Information Center

    McCage, Ronald D.; Olson, Chris M.

    A study focused on defining what is needed to build an occupational classification system using a skills-based concept. A thorough analysis was conducted of all existing classification systems and the new Dictionary of Occupational Titles (DOT) content model so that recommendations could be made regarding the revisions of the Standard Occupational…

  12. Dihedral-based segment identification and classification of biopolymers II: polynucleotides.

    PubMed

    Nagy, Gabor; Oostenbrink, Chris

    2014-01-27

    In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers I: Proteins. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400541d), we introduce a new algorithm for structure classification of biopolymeric structures based on main-chain dihedral angles. The DISICL algorithm (short for DIhedral-based Segment Identification and CLassification) classifies segments of structures containing two central residues. Here, we introduce the DISICL library for polynucleotides, which is based on the dihedral angles ε, ζ, and χ for the two central residues of a three-nucleotide segment of a single strand. Seventeen distinct structural classes are defined for nucleotide structures, some of which—to our knowledge—were not described previously in other structure classification algorithms. In particular, DISICL also classifies noncanonical single-stranded structural elements. DISICL is applied to databases of DNA and RNA structures containing 80,000 and 180,000 segments, respectively. The classifications according to DISICL are compared to those of another popular classification scheme in terms of the amount of classified nucleotides, average occurrence and length of structural elements, and pairwise matches of the classifications. While the detailed classification of DISICL adds sensitivity to a structure analysis, it can be readily reduced to eight simplified classes providing a more general overview of the secondary structure in polynucleotides.
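    A minimal sketch of a dihedral-range lookup in the spirit of DISICL, reduced to one residue's (ε, ζ, χ) triple for brevity; the class names and angular windows below are invented placeholders, not DISICL's published library.

```python
# Hedged sketch: classify a segment by testing whether its dihedral
# angles fall inside per-class angular windows. Windows are invented.
def in_window(angle, lo, hi):
    """True if angle (degrees, -180..180) lies in the window lo..hi,
    with wrap-around when lo > hi."""
    angle = (angle + 180.0) % 360.0 - 180.0
    return lo <= angle <= hi if lo <= hi else (angle >= lo or angle <= hi)

# class -> ((eps_lo, eps_hi), (zeta_lo, zeta_hi), (chi_lo, chi_hi));
# placeholder values, not DISICL's library definitions.
LIBRARY = {
    "A-like": ((-170, -140), (-80, -45), (-180, -155)),
    "B-like": ((-120, -90),  (-60, -30), (-120, -80)),
}

def classify_segment(eps, zeta, chi):
    for name, windows in LIBRARY.items():
        if all(in_window(a, lo, hi)
               for a, (lo, hi) in zip((eps, zeta, chi), windows)):
            return name
    return "unclassified"

print(classify_segment(-150.0, -60.0, -170.0))   # -> "A-like"
```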

  13. Dihedral-Based Segment Identification and Classification of Biopolymers II: Polynucleotides

    PubMed Central

    2013-01-01

    In an accompanying paper (Nagy, G.; Oostenbrink, C. Dihedral-based segment identification and classification of biopolymers I: Proteins. J. Chem. Inf. Model. 2013, DOI: 10.1021/ci400541d), we introduce a new algorithm for structure classification of biopolymeric structures based on main-chain dihedral angles. The DISICL algorithm (short for DIhedral-based Segment Identification and CLassification) classifies segments of structures containing two central residues. Here, we introduce the DISICL library for polynucleotides, which is based on the dihedral angles ε, ζ, and χ for the two central residues of a three-nucleotide segment of a single strand. Seventeen distinct structural classes are defined for nucleotide structures, some of which—to our knowledge—were not described previously in other structure classification algorithms. In particular, DISICL also classifies noncanonical single-stranded structural elements. DISICL is applied to databases of DNA and RNA structures containing 80,000 and 180,000 segments, respectively. The classifications according to DISICL are compared to those of another popular classification scheme in terms of the amount of classified nucleotides, average occurrence and length of structural elements, and pairwise matches of the classifications. While the detailed classification of DISICL adds sensitivity to a structure analysis, it can be readily reduced to eight simplified classes providing a more general overview of the secondary structure in polynucleotides. PMID:24364355

  14. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
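    A minimal sketch of the evaluation idea: individuals are assigned to the category with the maximal estimated posterior probability, and agreement with the true categories is scored with a weighted kappa; the data and the linear weighting are illustrative.

```python
# Hedged sketch: maximum-posterior classification scored with weighted
# kappa. Posteriors and true categories below are illustrative.
import numpy as np
from sklearn.metrics import cohen_kappa_score

true_class = np.array([0, 1, 2, 2, 1, 0, 2, 1])

# Posterior probabilities per individual (rows sum to 1); classify by
# the maximum posterior.
posteriors = np.array([
    [0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.2, 0.2, 0.6], [0.1, 0.3, 0.6],
    [0.3, 0.5, 0.2], [0.6, 0.3, 0.1], [0.2, 0.3, 0.5], [0.2, 0.6, 0.2],
])
assigned = posteriors.argmax(axis=1)

# Linear weights penalise near-misses less than distant misses, which
# is what a discrete loss function over ordered categories expresses.
print(cohen_kappa_score(true_class, assigned, weights="linear"))
```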

  15. Classification of Ontario watersheds based on physical attributes and streamflow series

    NASA Astrophysics Data System (ADS)

    Razavi, Tara; Coulibaly, Paulin

    2013-06-01

    Nonlinear cluster analysis techniques including Self Organizing Maps (SOMs), standard Non-Linear Principal Component Analysis (NLPCA) and Compact Non-Linear Principal Component Analysis (Compact-NLPCA) are investigated for the identification of hydrologically homogeneous clusters of watersheds across Ontario, Canada. The results of classification based on catchment attributes and streamflow series of Ontario watersheds are compared to those of two benchmarks: the standard Principal Component Analysis (PCA) and a K-means classification based on recently proposed runoff signatures. The latter classified the 90 watersheds into four homogeneous groups, used as a reference classification to evaluate the performance of the nonlinear clustering techniques. The similarity index between the largest group of the reference classification and the one from the NLPCA based on streamflow is about 0.58; for the Compact-NLPCA it is about 0.56, and for the SOM it is about 0.52. Furthermore, those results remain much the same when the watersheds are classified based on watershed attributes, suggesting that the nonlinear classification methods can be robust tools for the classification of ungauged watersheds prior to regionalization. Distinct patterns of flow regime characteristics and specific dominant hydrological attributes are identified in the clusters obtained from the nonlinear classification techniques, indicating that the classifications are sound from the hydrological point of view.

  16. HYDROLOGIC REGIME CLASSIFICATION OF LAKE MICHIGAN COASTAL RIVERINE WETLANDS BASED ON WATERSHED CHARACTERISTICS

    EPA Science Inventory

    Classification of wetlands systems is needed not only to establish reference condition, but also to predict the relative sensitivity of different wetland classes. In the current study, we examined the potential for ecoregion- versus flow-based classification strategies to explain...

  17. HYDROLOGIC REGIME CLASSIFICATION OF LAKE MICHIGAN COASTAL RIVERINE WETLANDS BASED ON WATERSHED CHARACTERISTICS

    EPA Science Inventory

    Classification of wetlands systems is needed not only to establish reference condition, but also to predict the relative sensitivity of different wetland classes. In the current study, we examined the potential for ecoregion- versus flow-based classification strategies to explain...

  18. Classification of recharge regimes based on measures of hydrologic similarity

    NASA Astrophysics Data System (ADS)

    Sivapalan, Murugesu; Harman, Ciaran J.

    2010-05-01

    Groundwater recharge is usually estimated with the use of detailed numerical models of the vadose zone, where it is treated as a steady-state process or is analyzed over short time periods (e.g., after single rainfall events). In reality, in natural settings groundwater recharge needs to be seen as the residual effect of the competition between gravitational drainage, the capillary action of the soils, and evaporation and plant water uptake. The competition is mediated by the nature of the soils and the biological activity of living organisms, including vegetation and its adaptive behavior. Due to the intermittency of the precipitation driver and the nonlinearity of soil-mediated processes, recharge can exhibit complex, nonlinear and threshold-like behavior. In many instances it may reflect memory of previous events going back weeks and even months. What is the role of climate, soils and vegetation in governing such behavior? In this paper we adopt a similarity framework to assess recharge behavior in different climate-soil settings, in order to classify a range of recharge regimes and the climate and soil controls that lead to such organization. A simple "multiple wetting front" model of unsaturated zone fluxes is used to carry out long-term simulations of recharge, driven by artificial rainfall time series that include multi-scale variability ranging from within-storm patterns to seasonality and inter-annual and inter-decadal variations. The results suggest a classification system based on the ratio of time scales that characterize the propagation of variability through the vadose zone and on the competition between the different forces that act on the water, including vegetation functioning. The analysis can be extended to estimate the residence time and age of the water that recharges, factors that are important to quantify the chemical composition of the water

  19. Nanocellulose-based composites and bioactive agents for food packaging.

    PubMed

    Khan, Avik; Huq, Tanzina; Khan, Ruhul A; Riedl, Bernard; Lacroix, Monique

    2014-01-01

    Global environmental concern, regarding the use of petroleum-based packaging materials, is encouraging researchers and industries in the search for packaging materials from natural biopolymers. Bioactive packaging is gaining more and more interest not only due to its environment friendly nature but also due to its potential to improve food quality and safety during packaging. Some of the shortcomings of biopolymers, such as weak mechanical and barrier properties can be significantly enhanced by the use of nanomaterials such as nanocellulose (NC). The use of NC can extend the food shelf life and can also improve the food quality as they can serve as carriers of some active substances, such as antioxidants and antimicrobials. The NC fiber-based composites have great potential in the preparation of cheap, lightweight, and very strong nanocomposites for food packaging. This review highlights the potential use and application of NC fiber-based nanocomposites and also the incorporation of bioactive agents in food packaging.

  20. Agent-based intelligent medical diagnosis system for patients.

    PubMed

    Zhang, Yingfeng; Liu, Sichao; Zhu, Zhenfei; Si, Shubin

    2015-01-01

    According to the analysis of the challenges faced by current public health circumstances, such as the sharp increase in elderly patients and limited medical personnel, resources and technology, an agent-based intelligent medical diagnosis system for patients (AIMDS) is proposed in this research. Based on advanced sensing technology and professional medical knowledge, the AIMDS can output appropriate medical prescriptions and food prohibitions when the physical signs and symptoms of the patient are inputted. Three core modules are designed: a sensing module, an intuitionistic fuzzy set theory-based medical diagnosis module, and a medical knowledge module. A case study simulation shows that the optimized prescription can reach the desired level, with good curative effect for the patient's disease. The presented AIMDS integrates sensing techniques and intelligent medical diagnosis methods to make an accurate diagnosis, resulting in three types of optimized prescriptions for patient selection.

  1. Accelerometry-based classification of human activities using Markov modeling.

    PubMed

    Mannini, Andrea; Sabatini, Angelo Maria

    2011-01-01

    Accelerometers are a popular choice as body-motion sensors: the reason lies partly in their capability of extracting information that is useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where contextual awareness may ease human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, without discarding the time history of previous outcomes as GMMs do. An example of the benefits of the obtained statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
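    A minimal sketch of the HMM classification scheme, assuming synthetic accelerometer features and the hmmlearn package: one Gaussian HMM is trained per activity, and a new sequence is assigned to the model with the highest log-likelihood.

```python
# Hedged sketch: per-activity Gaussian HMMs, classification by maximum
# likelihood. The feature sequences below are synthetic stand-ins.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_seq(mean):
    """Fake 3-axis accelerometer feature sequence around a mean level."""
    return rng.normal(loc=mean, size=(200, 3))

train = {"walking": make_seq(0.0), "running": make_seq(2.0)}

models = {}
for activity, X in train.items():
    models[activity] = GaussianHMM(n_components=3,
                                   covariance_type="diag").fit(X)

test = make_seq(2.0)  # should look like "running"
print(max(models, key=lambda a: models[a].score(test)))
```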

  2. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1)Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  3. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  4. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands...

  5. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1)Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  6. Support vector machine classification trees based on fuzzy entropy of classification.

    PubMed

    de Boves Harrington, Peter

    2017-02-15

    The support vector machine (SVM) is a powerful classifier that has recently been implemented in a classification tree (SVMTreeG). This classifier partitions the data by finding gaps in the data space. For large and complex datasets, there may be no gaps in the data space, confounding this type of classifier. A novel algorithm was devised that uses fuzzy entropy to find optimal partitions for situations when clusters of data overlap in the data space. A kernel version of the fuzzy entropy algorithm was also devised. A fast support vector machine implementation is used that has no cost C or slack variables to optimize. Statistical comparisons using bootstrapped Latin partitions among the tree classifiers were made using a synthetic XOR data set, validated with ten prediction sets comprising 50,000 objects, and a data set of NMR spectra obtained from 12 tea sample extracts.

  7. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
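
    A divergence of this family can be estimated directly from data via the Friedman-Rafsky statistic: the number of edges of a Euclidean minimum spanning tree over the pooled samples that join points from different samples. The sketch below assumes the Henze-Penrose style normalization; the paper's exact divergence and bound expressions are not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.sparse.csgraph import minimum_spanning_tree

    def fr_divergence(X, Y):
        """Nonparametric divergence estimate from MST cross-sample edge counts."""
        Z = np.vstack([X, Y])
        labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
        mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()         # Euclidean MST
        cross = int(np.sum(labels[mst.row] != labels[mst.col]))  # cross edges
        m, n = len(X), len(Y)
        # few cross edges -> well-separated samples -> divergence near 1
        return 1.0 - cross * (m + n) / (2.0 * m * n)
    ```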

  8. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  9. A novel alignment repulsion algorithm for flocking of multi-agent systems based on the number of neighbours per agent

    NASA Astrophysics Data System (ADS)

    Kahani, R.; Sedigh, A. K.; Mahjani, M. Gh.

    2015-12-01

    In this paper, an energy-based control methodology is proposed to satisfy the Reynolds three rules in a flock of multiple agents. First, a control law is provided that is directly derived from the passivity theorem. In the next step, the Number of Neighbours Alignment/Repulsion algorithm is introduced for a flock of agents that has lost the cohesion ability and the uniformly joint connectivity condition. With this method, each agent tries to follow the agents that escape its neighbourhood by considering the velocity at escape time and the number of neighbours. It is mathematically proved that the motion of multiple agents converges to a rigid and uncrowded flock if the group is jointly connected just for an instant. Moreover, collision avoidance is guaranteed during the entire process. Finally, simulation results are presented to show the effectiveness of the proposed methodology.
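
    For intuition, the following is a rough numerical caricature of an alignment/repulsion update in which the alignment gain depends on how many neighbours an agent currently has; the gains, radii, and weighting are placeholders and do not reproduce the paper's passivity-based control law.

    ```python
    import numpy as np

    def step(pos, vel, r_neigh=5.0, r_rep=1.0, k_align=0.1, k_rep=0.5, dt=0.1):
        """One update for n agents; pos and vel are arrays of shape (n, 2)."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            neigh = np.flatnonzero((d > 0) & (d < r_neigh))
            if neigh.size:
                # alignment toward the mean neighbour velocity; dividing by the
                # neighbour count strengthens the pull when few neighbours remain
                new_vel[i] += (k_align / neigh.size) * (vel[neigh].mean(axis=0) - vel[i])
            for j in np.flatnonzero((d > 0) & (d < r_rep)):
                # inverse-square repulsion away from crowding agents
                new_vel[i] += k_rep * (pos[i] - pos[j]) / d[j] ** 2
        return pos + dt * new_vel, new_vel
    ```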

  10. A method for cloud detection and opacity classification based on ground based sky imagery

    NASA Astrophysics Data System (ADS)

    Ghonima, M. S.; Urquhart, B.; Chow, C. W.; Shields, J. E.; Cazorla, A.; Kleissl, J.

    2012-11-01

    Digital images of the sky obtained using a total sky imager (TSI) are classified pixel by pixel into clear sky, optically thin and optically thick clouds. A new classification algorithm was developed that compares the pixel red-blue ratio (RBR) to the RBR of a clear sky library (CSL) generated from images captured on clear days. The difference, rather than the ratio, between pixel RBR and CSL RBR resulted in more accurate cloud classification. High correlation between TSI image RBR and aerosol optical depth (AOD) measured by an AERONET photometer was observed and motivated the addition of a haze correction factor (HCF) to the classification model to account for variations in AOD. Thresholds for clear and thick clouds were chosen based on a training image set and validated with a set of manually annotated images. Misclassifications of clear and thick clouds into the opposite category were less than 1%. Thin clouds were classified with an accuracy of 60%. Accurate cloud detection and opacity classification techniques will improve the accuracy of short-term solar power forecasting.
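
    The decision rule can be sketched in a few lines: compute the pixel RBR, subtract the clear-sky-library RBR adjusted by the haze correction factor, and threshold the difference. The threshold values and the HCF handling below are placeholders, not the trained values from the paper.

    ```python
    import numpy as np

    def classify_sky(red, blue, csl_rbr, hcf=0.0, t_clear=0.05, t_thick=0.20):
        """Return per-pixel labels: 0 = clear, 1 = thin cloud, 2 = thick cloud."""
        rbr = red / np.maximum(blue, 1e-6)        # pixel red-blue ratio
        diff = rbr - (csl_rbr + hcf)              # difference to corrected CSL
        out = np.ones(rbr.shape, dtype=np.uint8)  # default: thin cloud
        out[diff < t_clear] = 0                   # close to the library: clear
        out[diff > t_thick] = 2                   # far above the library: thick
        return out
    ```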

  11. A method for cloud detection and opacity classification based on ground based sky imagery

    NASA Astrophysics Data System (ADS)

    Ghonima, M. S.; Urquhart, B.; Chow, C. W.; Shields, J. E.; Cazorla, A.; Kleissl, J.

    2012-07-01

    Digital images of the sky obtained using a total sky imager (TSI) are classified pixel by pixel into clear sky, optically thin and optically thick clouds. A new classification algorithm was developed that compares the pixel red-blue ratio (RBR) to the RBR of a clear sky library (CSL) generated from images captured on clear days. The difference, rather than the ratio, between pixel RBR and CSL RBR resulted in more accurate cloud classification. High correlation between TSI image RBR and aerosol optical depth (AOD) measured by an AERONET photometer was observed and motivated the addition of a haze correction factor (HCF) to the classification model to account for variations in AOD. Thresholds for clear and thick clouds were chosen based on a training image set and validated with a set of manually annotated images. Misclassifications of clear and thick clouds into the opposite category were less than 1%. Thin clouds were classified with an accuracy of 60%. Accurate cloud detection and opacity classification techniques will improve the accuracy of short-term solar power forecasting.

  12. [Spectra Classification Based on Local Mean-Based K-Nearest Centroid Neighbor Method].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Wang, Zhi-heng; Wei, Peng; Luo, A-li; Zhao, Yong-heng

    2015-04-01

    In the present paper, a local mean-based K-nearest centroid neighbor (LMKNCN) technique is used for the classification of stars, galaxies, and quasars (QSOs). The main idea of LMKNCN is that it relies on the principle of the nearest centroid neighborhood (NCN): for each class, it selects K centroid neighbors as training samples and then assigns a query pattern to the class whose local centroid mean vector is closest to the query. In this paper, KNN, KNCN, and LMKNCN were experimentally compared on these three kinds of spectral data, drawn from SDSS-DR8. Among the three methods, the correct-classification rate of the LMKNCN algorithm is higher than or comparable to those of the other two, and its average correct-classification rate is the highest, especially for the identification of quasars. The experiments show that these results are significant for the spectral classification of galaxies, stars, and quasars.
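
    A minimal sketch of the LMKNCN decision rule under the NCN selection described above; the greedy centroid search and the variable names are illustrative, not taken from the paper's implementation.

    ```python
    import numpy as np

    def ncn_neighbors(X, q, k):
        """Greedily pick k centroid neighbors of query q from rows of X."""
        chosen = []
        for _ in range(k):
            best, best_d = None, np.inf
            for i in range(len(X)):
                if i in chosen:
                    continue
                cand = X[chosen + [i]].mean(axis=0)  # centroid with candidate added
                d = np.linalg.norm(q - cand)
                if d < best_d:
                    best, best_d = i, d
            chosen.append(best)
        return X[chosen]

    def lmkncn_predict(train, labels, q, k=5):
        """Assign q to the class whose local centroid mean is nearest."""
        best_label, best_d = None, np.inf
        for c in np.unique(labels):
            local_mean = ncn_neighbors(train[labels == c], q, k).mean(axis=0)
            d = np.linalg.norm(q - local_mean)
            if d < best_d:
                best_label, best_d = c, d
        return best_label
    ```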

  13. Router Agent Technology for Policy-Based Network Management

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston

    2011-01-01

    This innovation can be run as a standalone network application on any computer in a networked environment. This design can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors encountered. The innovation also automatically generates a time-stamped log file recording all updates to the router it is configured to control. This technology, once installed on a local network computer and started, is autonomous: it keeps listening for new policies from the policy server, transforms those policies into router-compliant access lists, and commits those access lists to a specified interface on the specified router, with error feedback on the commit process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for CISCO routers.
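
    To make the transformation step concrete, here is a minimal, hypothetical sketch of rendering policy records into Cisco-style extended ACL lines with a time-stamped log entry; the policy schema, ACL number, and log format are assumptions for illustration, not the innovation's actual formats.

    ```python
    import time

    def policy_to_acl(policies, acl_id=101):
        """policies: e.g. [{"action": "deny", "proto": "tcp",
                            "src": "any", "dst": "host 10.0.0.5", "port": 23}]"""
        lines = [f"no access-list {acl_id}"]   # replace the previous list
        for p in policies:
            lines.append(
                f"access-list {acl_id} {p['action']} {p['proto']} "
                f"{p['src']} {p['dst']} eq {p['port']}"
            )
        return lines

    def log_update(router, lines, path="routeragent.log"):
        with open(path, "a") as f:             # time-stamped audit trail
            f.write(f"{time.ctime()} {router}: {len(lines)} ACL lines committed\n")
    ```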

  14. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling

    PubMed Central

    Groff, Elizabeth R.

    2014-01-01

    Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity—agent-based computational modeling—that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Method: Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Results: Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Conclusion: Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs—not without its own issues—may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification. PMID:25419001

  15. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    PubMed

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity (agent-based computational modeling) that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  16. An agent-based approach to financial stylized facts

    NASA Astrophysics Data System (ADS)

    Shimokawa, Tetsuya; Suzuki, Kyoko; Misawa, Tadanobu

    2007-06-01

    An important challenge for financial theory in recent years has been to construct more sophisticated models that are consistent with as many as possible of the financial stylized facts that cannot be explained by traditional models. Recently, psychological studies on decision making under uncertainty, originating in Kahneman and Tversky's research, have attracted a lot of interest as key factors for explaining the financial stylized facts. These psychological results have been applied to the theory of investors' decision making and to financial equilibrium modeling. Following these behavioral finance studies, this paper proposes an agent-based equilibrium model with prospect-theoretic features of investors. Our goal is to point out the possibility that the loss-averse feature of investors explains a vast number of financial stylized facts and plays a crucial role in the price formation of financial markets. The price process endogenously generated by our model is consistent not only with the equity premium puzzle and the volatility puzzle, but also with excess kurtosis, asymmetry of the return distribution, autocorrelation of return volatility, and cross-correlation between return volatility and trading volume. Moreover, using agent-based simulations, the paper also provides an explanation of the size effect, whereby small-sized stocks enjoy excess returns compared to large-sized stocks, from the viewpoint of a lack of market liquidity.
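
    The loss-averse ingredient is the Kahneman-Tversky value function, which a short sketch makes concrete; the parameter values below are the commonly cited experimental estimates, not necessarily the values used in this paper's model.

    ```python
    def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Value of a gain/loss x relative to a reference point: concave over
        gains, convex over losses, with losses weighted lam times more heavily.
        For example, prospect_value(-100) is about -2.25 * 100**0.88."""
        return x ** alpha if x >= 0 else -lam * (-x) ** beta
    ```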

  17. From Compartmentalized to Agent-based Models of Epidemics

    NASA Astrophysics Data System (ADS)

    Macal, Charles

    Supporting decisions in the throes of an impending epidemic poses distinct technical challenges arising from the uncertainties in modeling disease propagation processes and the need to produce timely answers to policy questions. Compartmental models, because of their relative simplicity, produce timely information, but often lack the fidelity needed to answer specific policy questions. Highly granular agent-based simulations produce extensive information on all aspects of a simulated epidemic, yet such complex models often cannot produce this information in a timely manner. We propose a two-phased approach to addressing the tradeoff between model complexity and the speed at which models can be used to answer questions about an impending outbreak. In the first phase, in advance of an epidemic, ensembles of highly granular agent-based simulations are run over the entire parameter space, characterizing the space of possible model outcomes and uncertainties. Meta-models are derived that characterize model outcomes as dependent on uncertainties in disease parameters, data, and structural relationships. In the second phase, envisioned as taking place during an epidemic, the meta-model is run in combination with compartmental models, which can be run very quickly. Model outcomes are compared as a basis for establishing uncertainties in model forecasts. This work is supported by the U.S. Department of Energy under Contract number DE-AC02-06CH11357 and National Science Foundation (NSF) RAPID Award DEB-1516428.
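
    The speed gap is easy to appreciate: a compartmental model of the SIR type integrates in milliseconds. A minimal Euler-stepped sketch is shown below, with illustrative transmission and recovery rates; the actual compartmental and meta-models are of course far more detailed.

    ```python
    def sir(s, i, r, beta=0.3, gamma=0.1, dt=0.1, steps=1000):
        """Trace an SIR epidemic curve with forward-Euler steps."""
        out = []
        n = s + i + r
        for _ in range(steps):
            new_inf = beta * s * i / n * dt   # S -> I transitions this step
            new_rec = gamma * i * dt          # I -> R transitions this step
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            out.append((s, i, r))
        return out

    # e.g. sir(990.0, 10.0, 0.0) produces one full epidemic trajectory instantly
    ```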

  18. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis

    PubMed Central

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the bone marrow model proposed in this paper, and we include parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
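
    The division-limited behavior at the core of the model can be caricatured with a tiny agent sketch; the division limit and differentiation probability below are placeholders for illustration, not the paper's parameter values.

    ```python
    import random

    class Cell:
        """A cell that divides until a division limit, then undergoes apoptosis."""

        def __init__(self, divisions=0, max_divisions=50):
            self.divisions = divisions
            self.max_divisions = max_divisions

        def step(self, p_diff=0.3):
            """Return the cells remaining in the stem pool after one division."""
            if self.divisions >= self.max_divisions:
                return []                      # apoptosis: the lineage ends
            self.divisions += 1
            if random.random() < p_diff:
                return [self]                  # one daughter differentiates and exits
            return [self, Cell(self.divisions, self.max_divisions)]
    ```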

  19. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.

    PubMed

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the bone marrow model proposed in this paper, and we include parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website.

  20. Stromal-Based Signatures for the Classification of Gastric Cancer.

    PubMed

    Uhlik, Mark T; Liu, Jiangang; Falcon, Beverly L; Iyer, Seema; Stewart, Julie; Celikkaya, Hilal; O'Mahony, Marguerita; Sevinsky, Christopher; Lowes, Christina; Douglass, Larry; Jeffries, Cynthia; Bodenmiller, Diane; Chintharlapalli, Sudhakar; Fischl, Anthony; Gerald, Damien; Xue, Qi; Lee, Jee-Yun; Santamaria-Pang, Alberto; Al-Kofahi, Yousef; Sui, Yunxia; Desai, Keyur; Doman, Thompson; Aggarwal, Amit; Carter, Julia H; Pytowski, Bronislaw; Jaminet, Shou-Ching; Ginty, Fiona; Nasir, Aejaz; Nagy, Janice A; Dvorak, Harold F; Benjamin, Laura E

    2016-05-01

    Treatment of metastatic gastric cancer typically involves chemotherapy and monoclonal antibodies targeting HER2 (ERBB2) and VEGFR2 (KDR). However, reliable methods to identify patients who would benefit most from a combination of treatment modalities targeting the tumor stroma, including new immunotherapy approaches, are still lacking. Therefore, we integrated a mouse model of stromal activation and gastric cancer genomic information to identify gene expression signatures that may inform treatment strategies. We generated a mouse model in which VEGF-A is expressed via adenovirus, enabling a stromal response marked by immune infiltration and angiogenesis at the injection site, and identified distinct stromal gene expression signatures. With these data, we designed multiplexed IHC assays that were applied to human primary gastric tumors, assigning each tumor a dominant stromal phenotype representative of the vascular and immune diversity found in gastric cancer. We also refined the stromal gene signatures and explored their relation to the dominant patient phenotypes identified by recent large-scale studies of gastric cancer genomics (The Cancer Genome Atlas and the Asian Cancer Research Group), revealing four distinct stromal phenotypes. Collectively, these findings suggest that a genomics-based systems approach focused on the tumor stroma can be used to discover putative predictive biomarkers of treatment response, especially to antiangiogenesis agents and immunotherapy, thus offering an opportunity to improve patient stratification. Cancer Res; 76(9); 2573-86. ©2016 American Association for Cancer Research.