Science.gov

Sample records for agent based classification

  1. A Library Book Intelligence Classification System based on Multi-agent

    NASA Astrophysics Data System (ADS)

    Pengfei, Guo; Liangxian, Du; Junxia, Qi

    This paper introduces artificial intelligence concepts into the administrative system of the library and presents a multi-agent model of a robot system for book classification. The intelligent robot recognizes book barcodes automatically, and a classification algorithm based on the Chinese Library Classification is given. The algorithm computes the exact shelf position of each book and relates it to all similar books, so that the robot can shelve all books of the same class in a single pass without backtracking.

  2. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis for time series data generated by an agent-based simulation of the international emissions trading market, and compares it with an analytical method based on the discrete Fourier transform. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series as distances in the mapped space, which are easier to understand and reason about than the raw time series; (2) the methods can analyze uncertain time series data produced by the agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.

  3. Mass classification in mammography with multi-agent based fusion of human and machine intelligence

    NASA Astrophysics Data System (ADS)

    Xi, Dongdong; Fan, Ming; Li, Lihua; Zhang, Juan; Shan, Yanna; Dai, Gang; Zheng, Bin

    2016-03-01

    Although computer-aided diagnosis (CAD) systems can be applied to classify breast masses, their effect on improving radiologists' accuracy in distinguishing malignant from benign lesions remains unclear. This study provides a novel method to classify breast masses by integrating human and machine intelligence. In this research, 224 breast masses in mammography were selected from the DDSM database with Breast Imaging Reporting and Data System (BI-RADS) categories. Three observers (a senior and a junior radiologist, as well as a radiology resident) independently read and classified these masses utilizing the Positive Predictive Value (PPV) for each BI-RADS category. Meanwhile, a CAD system was also implemented to classify these breast masses as malignant or benign. To combine the decisions from the radiologists and CAD, a multi-agent fusion method was developed. Significant improvements were observed for the fusion system over either the radiologists or CAD alone. The area under the receiver operating characteristic curve (AUC) of the fusion system increased by 9.6%, 10.3% and 21% compared to that of the senior radiologist, junior radiologist and resident, respectively. In addition, the AUC of the fusion of each individual radiologist with CAD was 3.5%, 3.6% and 3.3% higher than that of CAD alone. Finally, the fusion of all three radiologists with CAD achieved an AUC of 0.957, 5.6% higher than CAD alone. Our results indicate that the proposed fusion method performs better than either radiologists or CAD alone.
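
    A minimal sketch of the fusion idea above, assuming each radiologist's BI-RADS category has already been mapped to a PPV-derived score in [0, 1] and the CAD outputs a malignancy probability; the fusion weights, synthetic scores, and the fuse_scores helper are illustrative assumptions, not taken from the paper.

    ```python
    # Hypothetical weighted fusion of radiologist and CAD scores, evaluated by AUC.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def fuse_scores(score_list, weights):
        """Weighted combination of per-case scores from several agents."""
        stacked = np.column_stack(score_list)
        w = np.asarray(weights, dtype=float)
        return stacked @ (w / w.sum())

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 224)                        # 1 = malignant, 0 = benign
    senior   = np.clip(y * 0.70 + rng.normal(0.20, 0.25, 224), 0, 1)
    junior   = np.clip(y * 0.60 + rng.normal(0.25, 0.30, 224), 0, 1)
    resident = np.clip(y * 0.45 + rng.normal(0.30, 0.30, 224), 0, 1)
    cad      = np.clip(y * 0.65 + rng.normal(0.20, 0.30, 224), 0, 1)

    fused = fuse_scores([senior, junior, resident, cad], [0.35, 0.2, 0.1, 0.35])
    print("CAD alone AUC:", round(roc_auc_score(y, cad), 3))
    print("Fused AUC    :", round(roc_auc_score(y, fused), 3))
    ```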

  4. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  5. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  6. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1.) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2.) conducting research experiments using a larger database of organophosphate nerve agents, and (3.) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.

  7. Granular loess classification based

    SciTech Connect

    Browzin, B.S.

    1985-05-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.

  8. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability.
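
    A minimal sketch of the two-phase negotiation described above, under assumed bid and decision rules: each agent bids a utility that trades off detection confidence against energy cost, the best bidders take the classification task, and a confidence-weighted committee vote merges their labels. The Agent fields, utility form, and numbers are illustrative, not from the paper.

    ```python
    # Hypothetical auction-then-committee negotiation among sensor agents.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        confidence: float   # expected classification confidence in [0, 1]
        energy_cost: float  # relative resource cost of running the classifier

    def auction(agents, k=2, alpha=0.5):
        """Phase 1: allocate the task to the k agents with the best bids."""
        bids = {a.name: a.confidence - alpha * a.energy_cost for a in agents}
        return sorted(agents, key=lambda a: bids[a.name], reverse=True)[:k]

    def committee(decisions):
        """Phase 2: confidence-weighted vote over (label, confidence) pairs."""
        scores = {}
        for label, conf in decisions:
            scores[label] = scores.get(label, 0.0) + conf
        return max(scores, key=scores.get)

    agents = [Agent("mic1", 0.9, 0.6), Agent("mic2", 0.7, 0.2), Agent("mic3", 0.6, 0.1)]
    winners = auction(agents, k=2)
    decisions = [("vehicle", 0.80), ("person", 0.55)]  # winners' hypothetical outputs
    print([a.name for a in winners], "->", committee(decisions))
    ```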

  9. Classification-based reasoning

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Segami, Carlos

    1991-01-01

    A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.

  10. Optimal Information-based Classification

    NASA Astrophysics Data System (ADS)

    Hyun, Baro

    Classification is the allocation of an object to an existing category among several based on uncertain measurements. Since information is used to quantify uncertainty, it is natural to consider classification and information as complementary subjects. This dissertation touches upon several topics that relate to the problem of classification, such as information, classification, and team classification. Motivated by the U.S. Air Force Intelligence, Surveillance, and Reconnaissance missions, we investigate the aforementioned topics for classifiers that follow two models: classifiers with workload-independent and workload-dependent performance. We adopt workload-independence and dependence as "first-order" models to capture the features of machines and humans, respectively. We first investigate the relationship between information in the sense of Shannon and classification performance, which is defined as the probability of misclassification. We show that while there is a predominant congruence between them, there are cases when such congruence is violated. We show the phenomenon for both workload-independent and workload-dependent classifiers and investigate the cause of such phenomena analytically. One way of making classification decisions is by setting a threshold on a measured quantity. For instance, if a measurement falls on one side of the threshold, the object that provided the measurement is classified as one type, otherwise, it is of another type. Exploiting thresholding, we formalize a classifier with dichotomous decisions (i.e., with two options, such as true or false) given a single variable measurement. We further extend the formalization to classifiers with trichotomy (i.e., with three options, such as true, false or unknown) and with multivariate measurements. When a team of classifiers is considered, issues on how to exploit redundant numbers of classifiers arise. We analyze these classifiers under different architectures, such as parallel or nested
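
    A worked sketch of the thresholding idea described above, assuming the single-variable measurement is Gaussian under each class; the means, variance, and prior are illustrative. The probability of misclassification combines a miss under one class with a false alarm under the other, and sweeping the threshold recovers the best dichotomous decision rule.

    ```python
    # Hypothetical two-class threshold classifier on a Gaussian measurement.
    from scipy.stats import norm

    def p_error(t, mu0=0.0, mu1=2.0, sigma=1.0, p1=0.5):
        miss = norm.cdf(t, loc=mu1, scale=sigma)             # class 1 falls below t
        false_alarm = 1 - norm.cdf(t, loc=mu0, scale=sigma)  # class 0 falls above t
        return p1 * miss + (1 - p1) * false_alarm

    # Sweep thresholds; for equal priors the minimum sits midway between the means.
    best = min((p_error(t), t) for t in [x / 100 for x in range(-100, 300)])
    print("min P(error) = %.4f at t = %.2f" % best)
    ```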

  11. Agent-Based Literacy Theory

    ERIC Educational Resources Information Center

    McEneaney, John E.

    2006-01-01

    The purpose of this theoretical essay is to explore the limits of traditional conceptualizations of reader and text and to propose a more general theory based on the concept of a literacy agent. The proposed theoretical perspective subsumes concepts from traditional theory and aims to account for literacy online. The agent-based literacy theory…

  12. Agent-based forward analysis

    SciTech Connect

    Kerekes, Ryan A.; Jiao, Yu; Shankar, Mallikarjun; Potok, Thomas E.; Lusk, Rick M.

    2008-01-01

    We propose software agent-based "forward analysis" for efficient information retrieval in a network of sensing devices. In our approach, processing is pushed to the data at the edge of the network via intelligent software agents rather than pulling data to a central facility for processing. The agents are deployed with a specific query and perform varying levels of analysis of the data, communicating with each other and sending only relevant information back across the network. We demonstrate our concept in the context of face recognition using a wireless test bed comprised of PDA cell phones and laptops. We show that agent-based forward analysis can provide a significant increase in retrieval speed while decreasing bandwidth usage and information overload at the central facility.

  13. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. This makes them applicable to the inspection of BGA solder joints, which is usually performed with x-ray laminography and yields poorer reconstructed images than conventional tomography, but their convergence is slow. This paper explores a projection classification based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction speed decreases linearly from the solution to the other two parts on either side. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the advantage.
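
    A minimal sketch of a SART-style update on a toy linear system, with a per-voxel relaxation vector standing in for the projection-classification weighting described above (different update speeds for the solute, solution, and air classes); the system, relaxation values, and iteration count are illustrative assumptions, not the paper's algorithm.

    ```python
    # Toy SART iteration with class-dependent (per-voxel) relaxation.
    import numpy as np

    def sart_step(A, b, x, relax):
        row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1
        col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1
        residual = (b - A @ x) / row_sum          # normalized projection error
        return x + relax * (A.T @ residual) / col_sum

    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
    x_true = np.array([1.0, 2.0, 3.0])
    b = A @ x_true
    relax = np.array([1.0, 0.5, 1.0])             # e.g. slower update for one class
    x = np.zeros(3)
    for _ in range(200):
        x = sart_step(A, b, x, relax)
    print(np.round(x, 3))                          # approaches [1, 2, 3]
    ```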

  14. Standoff lidar simulation for biological warfare agent detection, tracking, and classification

    NASA Astrophysics Data System (ADS)

    Jönsson, Erika; Steinvall, Ove; Gustafsson, Ove; Kullander, Fredrik; Jonsson, Per

    2010-04-01

    Lidar has been identified as a promising sensor for remote detection of biological warfare agents (BWA). Elastic IR lidar can be used for cloud detection at long ranges and UV laser induced fluorescence can be used for discrimination of BWA against naturally occurring aerosols. This paper will describe a simulation tool which enables the simulation of lidar for detection, tracking and classification of aerosol clouds. The cloud model was available from another project and has been integrated into the model. It takes into account the type of aerosol, type of release (plume or puff), amounts of BWA, winds, height above the ground and terrain roughness. The model input includes laser and receiver parameters for both the IR and UV channels as well as the optical parameters of the background, cloud and atmosphere. The wind and cloud conditions and terrain roughness are specified for the cloud simulation. The search area including the angular sampling resolution together with the IR laser pulse repetition frequency defines the search conditions. After cloud detection in the elastic mode, the cloud can be tracked using appropriate algorithms. In the tracking mode the classification using fluorescence spectral emission is simulated and tested using correlation against known spectra. Other methods for classification based on elastic backscatter are also discussed as well as the determination of particle concentration. The simulation estimates and displays the lidar response, cloud concentration as well as the goodness of fit for the classification using fluorescence.

  15. Review of therapeutic agents for burns pruritus and protocols for management in adult and paediatric patients using the GRADE classification

    PubMed Central

    Goutos, Ioannis; Clarke, Maria; Upson, Clara; Richardson, Patricia M.; Ghosh, Sudip J.

    2010-01-01

    To review the current evidence on therapeutic agents for burns pruritus and use the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) classification to propose therapeutic protocols for adult and paediatric patients. All published interventions for burns pruritus were analysed by a multidisciplinary panel of burns specialists following the GRADE classification to rate individual agents. Following the collation of results and panel discussion, consensus protocols are presented. Twenty-three studies appraising therapeutic agents in the burns literature were identified. The majority of these studies (16 out of 23) are of an observational nature, making an evidence-based approach to defining optimal therapy not feasible. Our multidisciplinary approach employing the GRADE classification recommends the use of antihistamines (cetirizine and cimetidine) and gabapentin as the first-line pharmacological agents for both adult and paediatric patients. Ondansetron and loratadine are the second-line medications in our protocols. We additionally recommend a variety of non-pharmacological adjuncts for the perusal of clinicians in order to maximise symptomatic relief in patients troubled with postburn itch. Most studies in the subject area lack sufficient statistical power to dictate a ‘gold standard’ treatment agent for burns itch. We encourage clinicians to employ the GRADE system in order to delineate the most appropriate therapeutic approach for burns pruritus until further research elucidates the most efficacious interventions. This widely adopted classification empowers burns clinicians to tailor therapeutic regimens according to current evidence, patient values, risks and resource considerations in different medical environments. PMID:21321658

  16. Agent Assignment for Process Management: Pattern Based Agent Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Jablonski, Stefan; Talib, Ramzan

    In almost all workflow management systems, the role concept is determined once, at the introduction of the workflow application, and is not reevaluated to observe how successfully certain processes are performed by the authorized agents. This paper describes an approach which evaluates how successfully agents perform and feeds this information back into future agent assignment to achieve maximum business benefit for the enterprise. The approach, called Pattern based Agent Performance Evaluation (PAPE), is based on machine learning techniques combined with post-processing techniques. We report on the results of our experiments and discuss issues and improvements of our approach.

  17. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex exhibits remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighboring pixels, and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.

  18. Nanoparticle-based theranostic agents

    PubMed Central

    Xie, Jin; Lee, Seulki; Chen, Xiaoyuan

    2010-01-01

    Theranostic nanomedicine is emerging as a promising therapeutic paradigm. It takes advantage of the high capacity of nanoplatforms to ferry cargo and loads onto them both imaging and therapeutic functions. The resulting nanosystems, capable of diagnosis, drug delivery and monitoring of therapeutic response, are expected to play a significant role in the dawning era of personalized medicine, and much research effort has been devoted toward that goal. A convenience in constructing such function-integrated agents is that many nanoplatforms are already, themselves, imaging agents. Their well developed surface chemistry makes it easy to load them with pharmaceutics and promote them to be theranostic nanosystems. Iron oxide nanoparticles, quantum dots, carbon nanotubes, gold nanoparticles and silica nanoparticles, have been previously well investigated in the imaging setting and are candidate nanoplatforms for building up nanoparticle-based theranostics. In the current article, we will outline the progress along this line, organized by the category of the core materials. We will focus on construction strategies and will discuss the challenges and opportunities associated with this emerging technology. PMID:20691229

  19. Agent-based enterprise integration

    SciTech Connect

    N. M. Berry; C. M. Pancerella

    1998-12-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. The enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of the effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses the planned future work.

  20. CATS-based Agents That Err

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.

  1. Contour-based classification of video objects

    NASA Astrophysics Data System (ADS)

    Richter, Stephan; Kuehne, Gerald; Schuster, Oliver

    2000-12-01

    The recognition of objects that appear in a video sequence is an essential aspect of any video content analysis system. We present an approach which classifies a segmented video object based on its appearance in successive video frames. The classification is performed by matching curvature features of the contours of these object views to a database containing preprocessed views of prototypical objects using a modified curvature scale space technique. By integrating the results of a number of successive frames and by using the modified curvature scale space technique as an efficient representation of object contours, our approach enables robust, tolerant and rapid classification of video objects.

  2. Land classification based on hydrological landscape units

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Fenicia, F.; Hrachowitz, M.; Savenije, H. H. G.

    2011-05-01

    This paper presents a new type of hydrological landscape classification based on dominant runoff mechanisms. Three landscape classes are distinguished: wetland, hillslope and plateau, corresponding to three dominant hydrological regimes: saturation excess overland flow, storage excess sub-surface flow, and deep percolation. Topography, geology and land use hold the key to identifying these landscapes. The height above the nearest drain (HAND) and the surface slope, which can be readily obtained from a digital elevation model, appear to be the dominant topographical parameters for hydrological classification. In this paper several indicators for classification are tested as well as their sensitivity to scale and sample size. It appears that the best results are obtained by the simple use of HAND and slope. The results obtained compare well with field observations and the topographical wetness index. The new approach appears to be an efficient method to "read the landscape" on the basis of which conceptual models can be developed.
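
    A minimal sketch of the resulting decision rule, using illustrative thresholds (the paper calibrates the HAND and slope thresholds against field observations): low HAND marks wetland, steep slope marks hillslope, and the remainder is plateau.

    ```python
    # Hypothetical HAND-plus-slope landscape classification on a small grid.
    import numpy as np

    def classify_landscape(hand, slope, hand_thresh=5.0, slope_thresh=0.11):
        """hand in metres above the nearest drain, slope as tan(angle)."""
        classes = np.full(hand.shape, "plateau", dtype=object)
        classes[slope > slope_thresh] = "hillslope"
        classes[hand < hand_thresh] = "wetland"   # wetness dominates near drains
        return classes

    hand = np.array([[1.2, 8.0], [12.0, 3.0]])
    slope = np.array([[0.02, 0.25], [0.05, 0.30]])
    print(classify_landscape(hand, slope))
    ```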

  3. Adaptive learning based heartbeat classification.

    PubMed

    Srinivas, M; Basil, Tony; Mohan, C Krishna

    2015-01-01

    Cardiovascular diseases (CVD) are a leading cause of unnecessary hospital admissions as well as fatalities, placing an immense burden on the healthcare industry. A process to provide timely intervention can reduce the morbidity rate as well as control rising costs. Patients with cardiovascular diseases require quick intervention. Towards that end, automated detection of abnormal heartbeats captured by electrocardiogram (ECG) signals is vital. While cardiologists can identify different heartbeat morphologies quite accurately among different patients, the manual evaluation is tedious and time consuming. In this chapter, we propose new features from the time and frequency domains, and furthermore, feature normalization techniques to reduce inter-patient and intra-patient variations in heartbeat cycles. Our results using the adaptive learning based classifier emulate those reported in the existing literature and in most cases deliver improved performance, while eliminating the need for labeling of signals by domain experts. PMID:26484555
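
    A minimal sketch of such a feature pipeline: simple time-domain and frequency-domain descriptors per heartbeat window, followed by per-patient z-score normalization to damp inter-patient variation. The specific descriptors, sampling rate, and synthetic beats are illustrative stand-ins, not the chapter's feature set.

    ```python
    # Hypothetical time/frequency beat features with per-patient normalization.
    import numpy as np

    def beat_features(beat, fs=360.0):
        spectrum = np.abs(np.fft.rfft(beat))
        freqs = np.fft.rfftfreq(beat.size, d=1.0 / fs)
        return np.array([
            beat.max() - beat.min(),        # time domain: peak-to-peak amplitude
            np.sqrt(np.mean(beat ** 2)),    # time domain: RMS energy
            freqs[spectrum.argmax()],       # frequency domain: dominant frequency
            spectrum.sum() / beat.size,     # frequency domain: mean magnitude
        ])

    def normalize_per_patient(features):
        mu, sd = features.mean(axis=0), features.std(axis=0) + 1e-9
        return (features - mu) / sd

    rng = np.random.default_rng(1)
    beats = rng.normal(size=(10, 256))      # ten hypothetical beat windows
    X = normalize_per_patient(np.array([beat_features(b) for b in beats]))
    print(X.shape)                           # (10, 4) normalized feature vectors
    ```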

  4. An agent based model of genotype editing

    SciTech Connect

    Rocha, L. M.; Huang, C. F.

    2004-01-01

    This paper presents our investigation of an agent-based model of genotype editing. This model is based on several characteristics that are gleaned from the RNA editing system as observed in several organisms. The incorporation of editing mechanisms in an evolutionary agent-based model provides a means for evolving agents with heterogeneous post-transcriptional processes. The study of this agent-based genotype-editing model has shed some light on the evolutionary implications of RNA editing as well as established an advantageous evolutionary computation algorithm for machine learning. We expect that our proposed model may both facilitate determining the evolutionary role of RNA editing in biology and advance the current state of research in agent-based optimization.

  5. Lightcurve Based Classification Of Transient Events

    NASA Astrophysics Data System (ADS)

    Donalek, Ciro; Graham, M. J.; Mahabal, A.; Djorgovski, S. G.; Drake, A. J.; Moghaddam, B.; Turmon, M.; Chen, Y.; Sharma, N.

    2012-01-01

    In many scientific fields, a new generation of instruments is generating exponentially growing data streams that may enable significant new discoveries. The requirement to perform the analysis rapidly and objectively, coupled with the huge amount of data available, implies a need for automated event detection, classification, and decision making. In astronomy, this is the case with the new generation of synoptic sky surveys, which discover an ever increasing number of transient events. However, not all of them are equally interesting and worthy of follow-up with limited resources. This presents some unusual classification challenges: the data are sparse, heterogeneous and incomplete; evolving in time; and most of the relevant information comes from a variety of archival data and contextual information. We are exploring a variety of machine learning techniques, using the ongoing CRTS sky survey as a testbed: Bayesian networks, [dm,dt] histograms, decision trees, neural networks, and symbolic regression. In this work we focus on lightcurve based classification using a hierarchical approach where some astrophysically motivated major features are used to separate different groups of classes. Proceeding down the classification hierarchy, every node uses those classifiers that are demonstrated to work best for that particular task.
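
    A minimal sketch of the [dm,dt] histogram feature mentioned above: every pair of lightcurve points contributes a (time difference, magnitude difference) sample, and the 2D histogram of those pairs gives a fixed-size, sampling-tolerant input for a classifier. The bin edges and the synthetic lightcurve are illustrative assumptions.

    ```python
    # Hypothetical [dm, dt] histogram feature from an irregular lightcurve.
    import numpy as np

    def dmdt_histogram(t, m, dt_bins, dm_bins):
        i, j = np.triu_indices(t.size, k=1)      # all ordered pairs i < j
        dt, dm = t[j] - t[i], m[j] - m[i]
        hist, _, _ = np.histogram2d(dt, dm, bins=(dt_bins, dm_bins))
        return hist / hist.sum()                  # normalize away the pair count

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 100, 40))          # irregular observation times
    m = 18 + 0.5 * np.sin(t / 5) + rng.normal(0, 0.1, 40)
    H = dmdt_histogram(t, m,
                       dt_bins=np.logspace(-1, 2, 9),
                       dm_bins=np.linspace(-2, 2, 9))
    print(H.shape)                                 # (8, 8) feature grid
    ```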

  6. [Spectral classification based on Bayes decision].

    PubMed

    Liu, Rong; Jin, Hong-Mei; Duan, Fu-Qing

    2010-03-01

    The rapid development of astronomical observation has led to many large sky surveys such as the SDSS (Sloan Digital Sky Survey) and LAMOST (Large Sky Area Multi-Object Spectroscopic Telescope). Since these surveys produce very large numbers of spectra, automated spectral analysis becomes desirable and necessary. This paper studies a spectral classification method based on Bayes decision theory, which divides spectra into three types: star, galaxy and quasar. Firstly, principal component analysis (PCA) is used for feature extraction, and spectra are projected into the 3D PCA feature space; secondly, the class-conditional probability density functions are estimated using a non-parametric density estimation technique, the Parzen window approach; finally, the minimum-error Bayes decision rule is used for classification. In the Parzen window approach, the kernel width affects the density estimation, and hence the classification performance. Extensive experiments have been performed to analyze the relationship between the kernel width and the correct classification rate. The authors found that the correct rate increases as the kernel width approaches some threshold, while it decreases when the kernel width is smaller than this threshold. PMID:20496722
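
    A minimal sketch of the pipeline described above: project spectra onto three principal components, estimate each class-conditional density with a Parzen (Gaussian kernel) window, and apply the minimum-error Bayes rule. The synthetic spectra stand in for star/galaxy/quasar data, and the kernel width h is the quantity whose effect the paper studies.

    ```python
    # Hypothetical PCA + Parzen window + Bayes decision classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(c, 1.0, (100, 50)) for c in (0.0, 0.5, 1.0)])
    y = np.repeat([0, 1, 2], 100)               # 0 star, 1 galaxy, 2 quasar

    Z = PCA(n_components=3).fit_transform(X)    # 3D PCA feature space
    h = 0.3                                     # Parzen kernel width
    models = [KernelDensity(bandwidth=h).fit(Z[y == k]) for k in range(3)]
    priors = np.array([np.mean(y == k) for k in range(3)])

    # Minimum-error Bayes rule: maximize log p(z | class) + log prior.
    log_post = np.column_stack([m.score_samples(Z) for m in models]) + np.log(priors)
    pred = log_post.argmax(axis=1)
    print("training accuracy:", np.mean(pred == y))
    ```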

  7. Detection and classification of organophosphate nerve agent simulants using support vector machines with multiarray sensors.

    PubMed

    Sadik, Omowunmi; Land, Walker H; Wanekaya, Adam K; Uematsu, Michiko; Embrechts, Mark J; Wong, Lut; Leibensperger, Dale; Volykin, Alex

    2004-01-01

    The need for rapid and accurate detection systems is expanding, and the utilization of cross-reactive sensor arrays to detect chemical warfare agents in conjunction with novel computational techniques may prove to be a potential solution to this challenge. We have investigated the detection, prediction, and classification of various organophosphate (OP) nerve agent simulants using sensor arrays with a novel learning scheme known as support vector machines (SVMs). The OPs tested include parathion, malathion, dichlorvos, trichlorfon, paraoxon, and diazinon. A new data reduction software program was written in MATLAB V. 6.1 to extract steady-state and kinetic data from the sensor arrays. The program also creates training sets by mixing and randomly sorting any combination of data categories into both positive and negative cases. The resulting signals were fed into SVM software for "pairwise" and "one vs. all" classification. Experimental results for this new paradigm show a significant increase in classification accuracy when compared to artificial neural networks (ANNs). Three kernels, the S2000, the polynomial, and the Gaussian radial basis function (RBF), were tested and compared to the ANN. The following measures of performance were considered in the pairwise classification: receiver operating characteristic (ROC) Az indices, specificities, and positive predictive values (PPVs). The increases in ROC Az values, specificities, and PPVs ranged from 5% to 25%, 108% to 204%, and 13% to 54%, respectively, in all OP pairs studied when compared to the ANN baseline. Dichlorvos, trichlorfon, and paraoxon were perfectly predicted. Positive prediction for malathion was 95%. PMID:15032529
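
    A minimal sketch of pairwise SVM classification evaluated by the ROC Az (AUC) index, as in the study above, using scikit-learn's RBF kernel; the synthetic features stand in for the steady-state and kinetic array responses, and the paper's S2000 kernel is not reproduced here.

    ```python
    # Hypothetical pairwise SVM classification of two OP simulants.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0.0, 1, (80, 12)),   # class 0 sensor responses
                   rng.normal(0.8, 1, (80, 12))])  # class 1 sensor responses
    y = np.repeat([0, 1], 80)  # 0 = parathion, 1 = dichlorvos (hypothetical labels)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    svm = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
    print("ROC Az:", round(roc_auc_score(yte, svm.predict_proba(Xte)[:, 1]), 3))
    ```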

  8. [Automatic classification method of star spectrum data based on classification pattern tree].

    PubMed

    Zhao, Xu-Jun; Cai, Jiang-Hui; Zhang, Ji-Fu; Yang, Hai-Feng; Ma, Yang

    2013-10-01

    Frequent patterns, which appear frequently in a data set, play an important role in data mining. For stellar spectrum classification tasks, a classification rule mining method based on a classification pattern tree is presented on the basis of frequent patterns. The procedure is as follows. Firstly, a new tree structure, i.e., the classification pattern tree, is introduced based on the different frequencies of stellar spectral attributes in the database and their different importance for classification. The related concepts and the construction method of the classification pattern tree are also described in this paper. Then, the characteristics of the stellar spectra are mapped to the classification pattern tree. Top-down and bottom-up traversals are used to extract the classification rules from the tree. Meanwhile, the concept of pattern capability is introduced to adjust the number of classification rules and improve the construction efficiency of the classification pattern tree. Finally, the SDSS (Sloan Digital Sky Survey) stellar spectral data provided by the National Astronomical Observatory are used to verify the accuracy of the method. The results show that a higher classification accuracy is achieved. PMID:24409754

  9. Development of a rapid method for the automatic classification of biological agents' fluorescence spectral signatures

    NASA Astrophysics Data System (ADS)

    Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale

    2015-11-01

    Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate the fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for applying UMEL to the discrimination of the spectra of different BA simulants. Through this strategy, it has been possible to discriminate between these BA simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.

  10. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
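
    A minimal sketch of voxel classification along these lines: multi-scale Gaussian-derivative features per voxel and a K-nearest-neighbour classifier trained on rough labels, applied to a toy volume. The scales, K, and the stand-in labels are illustrative assumptions rather than the paper's tuned feature set.

    ```python
    # Hypothetical multi-scale appearance features + KNN voxel classifier.
    import numpy as np
    from scipy import ndimage
    from sklearn.neighbors import KNeighborsClassifier

    def voxel_features(volume, scales=(1.0, 2.0, 4.0)):
        feats = []
        for s in scales:
            feats.append(ndimage.gaussian_filter(volume, s))              # smoothed intensity
            feats.append(ndimage.gaussian_gradient_magnitude(volume, s))  # edge strength
            feats.append(ndimage.gaussian_laplace(volume, s))             # blob/tube response
        return np.stack([f.ravel() for f in feats], axis=1)

    rng = np.random.default_rng(5)
    vol = rng.normal(size=(16, 16, 16))
    labels = (vol < -0.5).astype(int).ravel()  # stand-in labels from region growing
    X = voxel_features(vol)
    knn = KNeighborsClassifier(n_neighbors=15).fit(X, labels)
    print("voxels classified as airway:", int(knn.predict(X).sum()))
    ```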

  11. Texture feature based liver lesion classification

    NASA Astrophysics Data System (ADS)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support the clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features to a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of the Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor filters, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combining Gabor, LBP and intensity features improved the results to a final accuracy of 97%.
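
    A minimal sketch of two of the descriptors compared above, GLCM statistics and an LBP histogram, computed on toy ROIs with scikit-image and fed to an SVM; the distances, angles, LBP parameters, and random ROIs are illustrative assumptions (newer scikit-image spells these functions graycomatrix/graycoprops).

    ```python
    # Hypothetical GLCM + LBP texture features for ROI classification.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    from sklearn.svm import SVC

    def roi_features(roi):
        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p).mean() for p in
                      ("contrast", "homogeneity", "energy", "correlation")]
        lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([glcm_feats, lbp_hist])

    rng = np.random.default_rng(6)
    rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
    y = [0] * 10 + [1] * 10                 # hypothetical lesion classes
    X = np.array([roi_features(r) for r in rois])
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```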

  12. Assurance in Agent-Based Systems

    SciTech Connect

    Gilliom, Laura R.; Goldsmith, Steven Y.

    1999-05-10

    Our vision of the future of information systems is one that includes engineered collectives of software agents which are situated in an environment over years and which increasingly improve the performance of the overall system of which they are a part. At a minimum, the movement of agent and multi-agent technology into National Security applications, including their use in information assurance, is apparent today. The use of deliberative, autonomous agents in high-consequence/high-security applications will require a commensurate level of protection and confidence in the predictability of system-level behavior. At Sandia National Laboratories, we have defined and are addressing a research agenda that integrates the surety (safety, security, and reliability) into agent-based systems at a deep level. Surety is addressed at multiple levels: The integrity of individual agents must be protected by addressing potential failure modes and vulnerabilities to malevolent threats. Providing for the surety of the collective requires attention to communications surety issues and mechanisms for identifying and working with trusted collaborators. At the highest level, using agent-based collectives within a large-scale distributed system requires the development of principled design methods to deliver the desired emergent performance or surety characteristics. This position paper will outline the research directions underway at Sandia, will discuss relevant work being performed elsewhere, and will report progress to date toward assurance in agent-based systems.

  13. Classification based on full decision trees

    NASA Astrophysics Data System (ADS)

    Genrikhov, I. E.; Djukova, E. V.

    2012-04-01

    The ideas underlying a series of the authors' studies dealing with the design of classification algorithms based on full decision trees are further developed. It is shown that the decision tree construction under consideration takes into account all the features satisfying a branching criterion. Full decision trees with an entropy branching criterion are studied as applied to precedent-based pattern recognition problems with real-valued data. Recognition procedures are constructed for solving problems with incomplete data (gaps in the feature descriptions of the objects) in the case when the learning objects are nonuniformly distributed over the classes. The authors' basic results previously obtained in this area are overviewed.

  14. Ladar-based terrain cover classification

    NASA Astrophysics Data System (ADS)

    Macedo, Jose; Manduchi, Roberto; Matthies, Larry H.

    2001-09-01

    An autonomous vehicle driving in a densely vegetated environment needs to be able to discriminate between obstacles (such as rocks) and penetrable vegetation (such as tall grass). We propose a technique for terrain cover classification based on the statistical analysis of the range data produced by a single-axis laser rangefinder (ladar). We first present theoretical models for the range distribution in the presence of homogeneously distributed grass and of obstacles partially occluded by grass. We then validate our results with real-world cases, and propose a simple algorithm to robustly discriminate between vegetation and obstacles based on the local statistical analysis of the range data.
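
    A minimal sketch of the statistical idea: within a sliding window of ladar returns, grass scatters returns across many depths while a solid obstacle clusters them tightly, so the local standard deviation of range separates the two. The window size, threshold, and synthetic ranges are illustrative assumptions, not the paper's calibrated models.

    ```python
    # Hypothetical grass-vs-obstacle discrimination from local range statistics.
    import numpy as np

    def classify_returns(ranges, window=9, sigma_thresh=0.15):
        labels, half = [], window // 2
        for i in range(len(ranges)):
            seg = ranges[max(0, i - half):i + half + 1]
            labels.append("grass" if np.std(seg) > sigma_thresh else "obstacle")
        return labels

    rng = np.random.default_rng(7)
    grass = 5.0 + rng.normal(0, 0.40, 30)   # penetrable: returns from many depths
    rock = 6.0 + rng.normal(0, 0.02, 30)    # solid surface: tight range cluster
    labels = classify_returns(np.concatenate([grass, rock]))
    print(labels[10], labels[50])            # grass obstacle
    ```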

  15. Ecology Based Decentralized Agent Management System

    NASA Technical Reports Server (NTRS)

    Peysakhov, Maxim D.; Cicirello, Vincent A.; Regli, William C.

    2004-01-01

    The problem of maintaining a desired number of mobile agents on a network is not trivial, especially if we want a completely decentralized solution. Decentralized control makes a system more robust and less susceptible to partial failures. The problem is exacerbated on wireless ad hoc networks where host mobility can result in significant changes in the network size and topology. In this paper we propose an ecology-inspired approach to the management of the number of agents. The approach associates agents with living organisms and tasks with food. Agents procreate or die based on the abundance of uncompleted tasks (food). We performed a series of experiments investigating properties of such systems and analyzed their stability under various conditions. We concluded that the ecology based metaphor can be successfully applied to the management of agent populations on wireless ad hoc networks.
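
    A minimal sketch of the procreate-or-die metaphor, assuming simple birth and death rules keyed to leftover tasks (food); the rates, thresholds, and workload process are illustrative assumptions, not the paper's experimental setup.

    ```python
    # Hypothetical ecology-inspired regulation of an agent population.
    import random

    def step(agents, tasks, birth_rate=0.3, death_rate=0.3):
        completed = min(agents, tasks)
        leftover = tasks - completed
        if leftover > 0:   # food surplus -> some agents procreate
            agents += sum(random.random() < birth_rate for _ in range(agents))
        else:              # food scarcity -> some agents die
            agents -= sum(random.random() < death_rate for _ in range(agents))
        return max(agents, 1)

    random.seed(0)
    agents = 5
    for t in range(20):
        tasks = random.randint(0, 40)        # workload arriving this step
        agents = step(agents, tasks)
    print("agent population after 20 steps:", agents)
    ```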

  16. Digital image-based classification of biodiesel.

    PubMed

    Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Almeida, Valber Elias; Araújo, Thomas Souto Policarpo; Melo, Jessica Priscila; Diniz, Paulo Henrique Gonçalves Dias; Véras, Germano

    2015-07-01

    This work proposes a simple, rapid, inexpensive, and non-destructive methodology based on digital images and pattern recognition techniques for classification of biodiesel according to oil type (cottonseed, sunflower, corn, or soybean). For this, color histograms in the RGB (extracted from digital images), HSI, and grayscale channels, and their combinations, were used as analytical information, which was then statistically evaluated using Soft Independent Modeling by Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and variable selection using the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). Despite good performances by the SIMCA and PLS-DA classification models, SPA-LDA provided better results (up to 95% for all approaches) in terms of accuracy, sensitivity, and specificity for both the training and test sets. The variables selected by the Successive Projections Algorithm clearly contained the information necessary for biodiesel type classification. This is important since a product may exhibit different properties depending on the feedstock used. Such variations directly influence the quality, and consequently the price. Moreover, intrinsic advantages such as quick analysis, requiring no reagents, and a noteworthy reduction of waste generation (by avoiding chemical characterization) all contribute towards the primary objective of green chemistry. PMID:25882407
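
    A minimal sketch of the digital-image pipeline: RGB channel histograms as the analytical signal, classified here with plain linear discriminant analysis standing in for the paper's SPA-LDA variable selection; the synthetic images, per-oil means, and bin count are illustrative placeholders.

    ```python
    # Hypothetical RGB-histogram features + LDA for oil-type classification.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def rgb_histogram(image, bins=16):
        return np.concatenate([
            np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
            for c in range(3)])              # R, G, B channel histograms

    rng = np.random.default_rng(8)
    oils = {"cottonseed": 90, "sunflower": 110, "corn": 130, "soybean": 150}
    X, y = [], []
    for label, mean in oils.items():
        for _ in range(12):
            img = rng.normal(mean, 25, (32, 32, 3)).clip(0, 255)
            X.append(rgb_histogram(img)); y.append(label)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", lda.score(X, y))
    ```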

  17. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (of degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel. This means that the number of false negative errors was reduced by 173%, while making no false positive errors, when compared to the AromaScan baseline performance. (3) The Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair. For the most difficult classification task, the Parathion versus

  18. Brain extraction based on locally linear representation-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Jiang, Jun; Wu, Yao; Zhang, Yu; Chen, Wufan; Feng, Qianjin

    2014-05-15

    Brain extraction is an important procedure in brain image analysis. Although numerous brain extraction methods have been presented, enhancing brain extraction methods remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across different sequences and scanners. To address this problem, we present a Locally Linear Representation-based Classification (LLRC) method for brain extraction. A novel classification framework is derived by introducing the locally linear representation to the classical classification model. Under this classification framework, a common label fusion approach can be considered as a special case and thoroughly interpreted. Locality is important for calculating the fusion weights in LLRC; this factor is also considered in determining that Local Anchor Embedding is more applicable for solving the locally linear coefficients compared with other linear representation approaches. Moreover, LLRC supplies a way to learn the optimal classification scores of the training samples in the dictionary to obtain accurate classification. The International Consortium for Brain Mapping and the Alzheimer's Disease Neuroimaging Initiative databases were used to build a training dataset containing 70 scans. To evaluate the proposed method, we used four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T, with a total of 241 scans). Experimental results demonstrate that the proposed method outperforms four common brain extraction methods (BET, BSE, GCUT, and ROBEX) and is comparable to BEaST, while being more accurate on some datasets. PMID:24525169

  19. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  1. Cirrhosis Classification Based on Texture Classification of Random Features

    PubMed Central

    Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So in this paper, multisequence MRI, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase images, is applied. However, existing CAD systems do not yet meet the clinical needs of cirrhosis staging, and few researchers are concerned with the problem at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. Extracting texture features is therefore the primary task. Compared with typical gray level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle-and-advanced stage). CCTCRF does not need strong assumptions except the sparse character of the image, contains sufficient texture information, comprises a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfying performance, and it is also compared with a typical NN classifier using GLCM features. PMID:24707317

  2. Multimodal based classification of schizophrenia patients.

    PubMed

    Cetin, Mustafa S; Houck, Jon M; Vergara, Victor M; Miller, Robyn L; Calhoun, Vince

    2015-01-01

    Schizophrenia is currently diagnosed by physicians through clinical assessment and their evaluation of the patient's self-reported experiences over the longitudinal course of the illness. There is great interest in identifying biologically based markers at the onset of illness, rather than relying on the evolution of symptoms across time. Functional network connectivity shows promise in providing individual-subject predictive power. The majority of previous studies considered the analysis of functional connectivity during the resting state using only fMRI. However, exclusive reliance on fMRI to generate such networks may limit inference on dysfunctional connectivity, which is hypothesized to underlie patient symptoms. In this work, we propose a framework for classification of schizophrenia patients and healthy control subjects based on using both fMRI and band-limited envelope correlation metrics in MEG to interrogate functional network components in the resting state. Our results show that the combination of these two methods provides valuable information that captures fundamental characteristics of brain network connectivity in schizophrenia. Such information is useful for the prediction of schizophrenia. Classification accuracy improved significantly (by up to ≈ 7%) relative to the fMRI method alone and (by up to ≈ 21%) relative to the MEG method alone. PMID:26736831

  3. Patterns of Use of an Agent-Based Model and a System Dynamics Model: The Application of Patterns of Use and the Impacts on Learning Outcomes

    ERIC Educational Resources Information Center

    Thompson, Kate; Reimann, Peter

    2010-01-01

    A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…

  4. Agent Based Modeling Applications for Geosciences

    NASA Astrophysics Data System (ADS)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements to keep track of thousands to millions of agents, a lack of methods and strategies for model validation, and the absence of a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  5. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  6. NISAC Agent Based Laboratory for Economics

    SciTech Connect

    Downes, Paula; Davis, Chris; Eidson, Eric; Ehlen, Mark; Gieseler, Charles; Harris, Richard

    2006-10-11

    The software provides large-scale microeconomic simulation of complex economic and social systems (such as supply chain and market dynamics of businesses in the US economy) and their dependence on physical infrastructure systems. The system is based on Agent simulation, where each entity of interest in the system to be modeled (for example, a Bank, individual firms, Consumer households, etc.) is specified in a data-driven sense to be individually represented by an Agent. The Agents interact using rules of interaction appropriate to their roles, and through those interactions complex economic and social dynamics emerge. The software is implemented in three tiers: a Java-based visualization client, a C++ control mid-tier, and a C++ computational tier.

  7. NISAC Agent Based Laboratory for Economics

    Energy Science and Technology Software Center (ESTSC)

    2006-10-11

    The software provides large-scale microeconomic simulation of complex economic and social systems (such as supply chain and market dynamics of businesses in the US economy) and their dependence on physical infrastructure systems. The system is based on Agent simulation, where each entity of interest in the system to be modeled (for example, a Bank, individual firms, Consumer households, etc.) is specified in a data-driven sense to be individually represented by an Agent. The Agents interact using rules of interaction appropriate to their roles, and through those interactions complex economic and social dynamics emerge. The software is implemented in three tiers: a Java-based visualization client, a C++ control mid-tier, and a C++ computational tier.

  8. Text Classification Using ESC-Based Stochastic Decision Lists.

    ERIC Educational Resources Information Center

    Li, Hang; Yamanishi, Kenji

    2002-01-01

    Proposes a new method of text classification using stochastic decision lists, ordered sequences of IF-THEN-ELSE rules. The method can be viewed as a rule-based method for text classification having advantages of readability and refinability of acquired knowledge. Advantages of rule-based methods over non-rule-based ones are empirically verified.…

  9. Classification techniques based on AI application to defect classification in cast aluminum

    NASA Astrophysics Data System (ADS)

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    This paper describes the Artificial Intelligence techniques applied to the interpretation of images of cast aluminum surfaces presenting different defects. The whole process includes on-line defect detection, feature extraction and defect classification. These topics are discussed in depth throughout the paper. Data preprocessing, as well as segmentation and feature extraction, are described. At this point, the algorithms employed, along with the descriptors used, are presented. A syntactic filter has been developed to model the information and to generate the input vector to the classification system. Classification of defects is achieved by means of rule-based systems, fuzzy models and neural nets. Different classification subsystems work together to solve the pattern recognition problem (hybrid systems). First, syntactic methods are used to obtain the filter that reduces the dimension of the input vector to the classification process. Rule-based classification is achieved by associating a grammar with each defect type; the knowledge base is formed by the information derived from the syntactic filter along with the inferred rules. The fuzzy classification subsystem uses production rules with fuzzy antecedents whose consequents are membership grades for each defect type. Different architectures of neural nets have been implemented with different results, as shown in the paper. At the highest classification level, the information given by the heterogeneous systems, as well as the history of the process, is supplied to an Expert System in order to drive the casting process.

  10. Hydrological Land Classification Based on Landscape Units

    NASA Astrophysics Data System (ADS)

    Gharari, S.; hrachowitz, M.; Fenicia, F.; Savenije, H.

    2011-12-01

    Landscape classification into meaningful hydrological units has important implications for hydrological modeling. Conceptual hydrological models, such as HBV-type models, are most commonly designed to represent catchments in a lumped or, at best, semi-distributed way, i.e. treating them as single entities or sometimes accounting for topographical and land cover variability by introducing some level of stratification. These oversimplifications can frequently lead to substantial misrepresentations of flow generating processes in the catchments in question, as feedback processes between topography, land cover and hydrology in different landscape units are poorly represented. By making use of readily available topographical information, hydrological units can be identified based on the concept of "Height above Nearest Drainage" (HAND; Rennó et al., 2008). These units are characterized by distinct hydrological behavior, and they can be represented using different model structures (Savenije, 2010). We selected the Wark Catchment in the Grand Duchy of Luxembourg and identified three landscape units: plateau, wetland and hillslope. The original HAND was compared to other, similar models for landscape classification, which make use of other topographical indicators. The models were applied to a 5×5 m² DEM and were tested using data collected in the field. The comparison between the models showed that HAND is a more appropriate hydrological descriptor than the other indicators. The map of the classified landscape was set in a probabilistic framework and was then used to determine the proportion of the individual units in the catchment. Different model structures were then assigned to the individual units and were used to model total runoff.
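
    A schematic of the unit-assignment step, assuming per-cell HAND and slope rasters; the thresholds and the decision order are invented here and would be catchment-specific in practice.

```python
# Sketch: classify DEM cells into wetland / hillslope / plateau from
# HAND and slope rasters, with assumed thresholds.
import numpy as np

hand = np.array([[0.5, 2.0, 9.0],
                 [1.0, 6.0, 12.0]])      # metres above nearest drainage (synthetic)
slope = np.array([[0.02, 0.15, 0.03],
                  [0.04, 0.20, 0.02]])   # local slope (synthetic)

units = np.full(hand.shape, "plateau", dtype=object)
units[(hand >= 1.5) & (slope >= 0.1)] = "hillslope"  # steep, above drainage level
units[hand < 1.5] = "wetland"                        # close to the drainage network
print(units)
```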

  11. FIPA agent based network distributed control system

    SciTech Connect

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capability to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agents' engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. The Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML RDFS-based, control-oriented ontology markup language is developed to standardize the description of arbitrary control system data processors. Control processes, described in this language, are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  12. Structure-based algorithms for microvessel classification

    PubMed Central

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2014-01-01

    Objective Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries and venules. PMID:25403335

  13. Classification of CMEs Based on Their Dynamics

    NASA Astrophysics Data System (ADS)

    Nicewicz, J.; Michalek, G.

    2016-05-01

    A large set of coronal mass ejections (CMEs; 6621 events) has been selected to study their dynamics as seen in the field of view of the Large Angle and Spectroscopic Coronagraph (LASCO) onboard the Solar and Heliospheric Observatory (SOHO) (LFOV). These events were selected based on having at least six height-time measurements so that their dynamic properties, in the LFOV, can be evaluated with reasonable accuracy. Height-time measurements (in the SOHO/LASCO catalog) were used to determine the velocities and accelerations of individual CMEs at successive distances from the Sun. Linear and quadratic functions were fitted to these data points. On the basis of the best fits to the velocity data points, we were able to classify CMEs into four groups. The groups not only display different dynamic behaviors but also different masses, widths, velocities, and accelerations. We also show that these groups of events are initiated by different onset mechanisms. The results of our study allow us to present a consistent classification of CMEs based on their dynamics.
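
    The fitting step lends itself to a short sketch: with invented height-time points, linear and quadratic fits give a speed and an acceleration estimate, and the residuals suggest which dynamic model describes a given CME better.

```python
# Sketch of the height-time fitting step on fabricated measurements.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # time in hours (invented)
h = np.array([3.1, 4.0, 5.2, 6.6, 8.3, 10.2])  # height in solar radii (invented)

lin = np.polyfit(t, h, 1)    # constant-speed model
quad = np.polyfit(t, h, 2)   # constant-acceleration model

speed = lin[0]               # Rs per hour
accel = 2.0 * quad[0]        # Rs per hour^2
resid_lin = np.sum((h - np.polyval(lin, t)) ** 2)
resid_quad = np.sum((h - np.polyval(quad, t)) ** 2)
print(f"speed = {speed:.2f} Rs/h, acceleration = {accel:.2f} Rs/h^2")
print(f"quadratic fit better: {resid_quad < resid_lin}")
```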

  14. Multiscale agent-based consumer market modeling.

    SciTech Connect

    North, M. J.; Macal, C. M.; St. Aubin, J.; Thimmapuram, P.; Bragen, M.; Hahn, J.; Karr, J.; Brigham, N.; Lacy, M. E.; Hampton, D.; Decision and Information Sciences; Procter & Gamble Co.

    2010-05-01

    Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This need is particularly critical when the model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped if they had to be used for industrial applications, because of the detail this type of modeling requires. However, a complementary method - agent-based modeling - shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine if brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems. In these situations, it directly influenced managerial decision making and produced substantial cost savings.

  15. Classification

    NASA Astrophysics Data System (ADS)

    Oza, Nikunj

    2012-03-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples—examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate’s measurements. The generalization performance of a learned model (how closely the target outputs and the model’s predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted.* In the sunspot classification example given above, each training example
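
    A toy illustration of this formalism, assuming an invented training set T of (x_i, y_i) pairs and a nearest-neighbor learner standing in for the generic algorithm L:

```python
# Toy (x_i, y_i) training set: each x_i is a feature vector, y_i its class.
from sklearn.neighbors import KNeighborsClassifier

T = [([4.2, 0.8], "type-A"),
     ([4.0, 1.1], "type-A"),
     ([1.3, 3.9], "type-B"),
     ([1.0, 4.2], "type-B")]
X, y = zip(*T)

# The learning algorithm L maps T to a model approximating x -> y.
model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[1.2, 4.0]]))   # -> ['type-B'] for this unseen input
```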

  16. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node, 6-strut pyramid structure which is being used by the NASA Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; the tetrahedron "moves" through the sequence of activities: strut extension, changing the center of gravity, and falling. Currently, strut extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendible strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.

  17. Agent-Based Modeling in Systems Pharmacology.

    PubMed

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling. PMID:26783498

  18. Hyperspectral imagery classification based on relevance vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Guopeng; Yu, Xuchu; Feng, Wufa; Xu, Weixiao; Zhang, Pengqiang

    2009-10-01

    The relevance vector machine is a sparse model in the Bayesian framework; its mathematical model has no regularization coefficient, and its kernel functions do not need to satisfy Mercer's condition. The RVM offers good generalization performance, and its predictions are probabilistic. In this paper, a hyperspectral imagery classification method based on the relevance vector machine is put forward. We introduce the sparse Bayesian classification model, regard RVM learning as the maximization of marginal likelihood, and select the fast sequential sparse Bayesian learning algorithm. Through an experiment in PHI imagery classification, the advantages of the relevance vector machine for hyperspectral imagery classification are demonstrated.

  19. CATS-based Air Traffic Controller Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision-making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, it seeks to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model, according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision-making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support evaluation of new air traffic control (ATC) methods to support new Air Traffic Management (ATM) concepts. Current approaches that use large, human

  20. Preliminary Research on Grassland Fine-classification Based on MODIS

    NASA Astrophysics Data System (ADS)

    Hu, Z. W.; Zhang, S.; Yu, X. Y.; Wang, X. S.

    2014-03-01

    Grassland ecosystems are important for climatic regulation and for maintaining soil and water. Research on grassland monitoring methods could provide an effective reference for grassland resource investigation. In this study, we used the vegetation index method for grassland classification. There are several types of climate in China; therefore, we used China's Main Climate Zone Maps to divide the study region into four climate zones. Based on the grassland classification system of the first nation-wide grass resource survey in China, we established a new grassland classification system suitable only for this research. We used MODIS images as the basic data resource and used the expert classifier method to perform grassland classification. Based on the 1:1,000,000 Grassland Resource Map of China, we obtained the basic distribution of all the grassland types, selected 20 samples evenly distributed across each type, and then used NDVI/EVI products to summarize the different spectral features of the different grassland types. Finally, we introduced other classification auxiliary data, such as elevation, accumulated temperature (AT), humidity index (HI) and rainfall. The nation-wide grassland classification map of China results from merging the grassland classifications of the different climate zones. The overall classification accuracy is 60.4%. The results indicate that the expert classifier is appropriate for nation-wide grassland classification, but the classification accuracy needs to be improved.

  1. A Curriculum-Based Classification System for Community Colleges.

    ERIC Educational Resources Information Center

    Schuyler, Gwyer

    2003-01-01

    Proposes and tests a community college classification system based on curricular characteristics and their association with institutional characteristics. Seeks readily available data correlates to represent percentage of a college's course offerings that are in the liberal arts. A simple two-category classification system using total enrollment…

  2. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference; the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
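
    A sketch of the importance-ranking step on synthetic data: a random forest scores invented terrain factors, and the top-ranked factors would then drive segmentation. The factor names and data are illustrative only.

```python
# Rank terrain factors by random-forest importance (synthetic example).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
factors = ["slope", "relief", "roughness", "curvature"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # classes driven by slope and relief

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(factors, rf.feature_importances_), key=lambda kv: -kv[1]):
    print(f"{name}: {imp:.2f}")
```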

  3. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.

  4. Robust spike classification based on frequency domain neural waveform features

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    Objective. We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. Approach. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. Main results. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. Significance. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm
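
    A stripped-down sketch of the frequency-domain idea on fabricated snippets: FFT magnitudes serve as the features and k-means performs the clustering (the SOM step for choosing the cluster number intuitively is omitted here).

```python
# Frequency-domain spike classification sketch: FFT magnitudes + k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 200 fake 64-sample snippets from two "units" with different shapes.
unit_a = np.sin(np.linspace(0, np.pi, 64)) * rng.uniform(0.8, 1.2, (100, 1))
unit_b = np.sin(np.linspace(0, 2 * np.pi, 64)) * rng.uniform(0.8, 1.2, (100, 1))
snippets = np.vstack([unit_a, unit_b]) + rng.normal(0, 0.05, (200, 64))

features = np.abs(np.fft.rfft(snippets, axis=1))   # frequency-domain features
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(labels[:5], labels[-5:])   # the two halves land in different clusters
```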

  5. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  6. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
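
    A hedged sketch of the core idea with synthetic data: a node-by-behavior count matrix is factored with LDA, and the resulting topic mixtures serve as latent social dimensions for a downstream classifier. Matrix shapes and labels are invented.

```python
# Behavior-based social dimensions: LDA on node-behavior counts, then
# a classifier on the latent dimensions. Synthetic data throughout.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
behavior_counts = rng.poisson(2.0, size=(300, 40))   # nodes x behavior features
node_labels = rng.integers(0, 2, 300)                # one example label per node

lda = LatentDirichletAllocation(n_components=8, random_state=0)
social_dims = lda.fit_transform(behavior_counts)     # latent social dimensions
clf = LogisticRegression().fit(social_dims, node_labels)
print(social_dims.shape)   # (300, 8): 8 dimensions per node
```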

  7. Behavior Based Social Dimensions Extraction for Multi-Label Classification.

    PubMed

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  8. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  9. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
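
    A rough sketch under simplifying assumptions: the feature rasters are stacked into a third-order tensor, and a plain SVD of the feature-mode unfolding stands in for the full tensor decomposition used to select component features.

```python
# Stack feature rasters into a tensor and compress the feature mode.
import numpy as np

rng = np.random.default_rng(8)
tensor = rng.normal(size=(64, 64, 6))      # rows x cols x feature rasters (synthetic)

unfold = tensor.reshape(-1, 6)             # feature-mode unfolding: cells x features
u, s, vt = np.linalg.svd(unfold, full_matrices=False)
k = 3                                      # keep 3 component features
components = (unfold @ vt[:k].T).reshape(64, 64, k)
print(components.shape)                    # (64, 64, 3), ready for k-NN classification
```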

  10. Better image texture recognition based on SVM classification

    NASA Astrophysics Data System (ADS)

    Liu, Kuan; Lu, Bin; Wei, Yaxun

    2013-10-01

    Texture classification is very important in remote sensing images, X-ray photos, and cell image interpretation and processing, and is also an active research area in computer vision, image processing, image analysis, and image retrieval. For spatial-domain images, texture analysis can use statistical methods to calculate a texture feature vector. In this paper, we use the gray-level co-occurrence matrix and Gabor filter responses to build the feature vector. Feature vectors are commonly classified with Bayesian methods, KNN, or BP neural networks; here, we use a statistical classification method based on SVM to classify images. Image classification generally includes four steps: image preprocessing, feature extraction, feature selection, and classification. In this paper, we take gray-scale images, obtain features by calculating the gray-level co-occurrence matrix and applying Gabor filtering, and then use an SVM for training and classification. The test results show that the SVM method is well suited to classifying images by their texture features, exhibiting strong adaptability and robustness.
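
    A minimal sketch of the GLCM-plus-SVM pipeline on synthetic gray-scale patches; Gabor responses, omitted here, could be appended to the same feature vectors. It assumes a recent scikit-image release, where the functions are spelled graycomatrix/graycoprops.

```python
# GLCM texture features feeding an SVM (synthetic smooth vs. rough patches).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def glcm_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

smooth = [rng.integers(100, 120, (32, 32), dtype=np.uint8) for _ in range(20)]
rough = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
X = [glcm_features(im) for im in smooth + rough]
y = [0] * 20 + [1] * 20

clf = SVC(kernel="rbf").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```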

  11. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection. PMID:26353275

  12. A Classification-based Review Recommender

    NASA Astrophysics Data System (ADS)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid to how best to present them to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

  13. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  14. Epiretinal membrane: optical coherence tomography-based diagnosis and classification

    PubMed Central

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system. PMID:27099458

  15. Epiretinal membrane: optical coherence tomography-based diagnosis and classification.

    PubMed

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system. PMID:27099458

  16. EXTENDING AQUATIC CLASSIFICATION TO THE LANDSCAPE SCALE HYDROLOGY-BASED STRATEGIES

    EPA Science Inventory

    Aquatic classification of single water bodies (lakes, wetlands, estuaries) is often based on geologic origin, while stream classification has relied on multiple factors related to landform, geomorphology, and soils. We have developed an approach to aquatic classification based o...

  17. Agent Based Modeling as an Educational Tool

    NASA Astrophysics Data System (ADS)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. [Figures: simulation using California GIS data; simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.]

  18. Comparison and analysis of biological agent category lists based on biosafety and biodefense.

    PubMed

    Tian, Deqiao; Zheng, Tao

    2014-01-01

    Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category lists and the defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category lists and the defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among or between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct their development of biodefense capabilities. Developing a suitable biological agent list should also strengthen international collaboration and cooperation. PMID:24979754

  19. Comparison and Analysis of Biological Agent Category Lists Based On Biosafety and Biodefense

    PubMed Central

    Tian, Deqiao; Zheng, Tao

    2014-01-01

    Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category lists and the defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category lists and the defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among or between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct their development of biodefense capabilities. Developing a suitable biological agent list should also strengthen international collaboration and cooperation. PMID:24979754

  20. Agent-based models of financial markets

    NASA Astrophysics Data System (ADS)

    Samanidou, E.; Zschischang, E.; Stauffer, D.; Lux, T.

    2007-03-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and, thereby, provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  1. Agent-based modeling in ecological economics.

    PubMed

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems. PMID:20146761

  2. Agent Based Model of Livestock Movements

    NASA Astrophysics Data System (ADS)

    Miron, D. J.; Emelyanova, I. V.; Donald, G. E.; Garner, G. M.

    The modelling of livestock movements within Australia is of national importance for the management and control of exotic disease spread, for infrastructure development, and for the economic forecasting of livestock markets. In this paper an agent based model for the forecasting of livestock movements is presented. It models livestock movements from farm to farm through a saleyard. The decision of farmers to sell or buy cattle is often complex and involves many factors such as climate forecast, commodity prices, the type of farm enterprise, the number of animals available and associated off-shore effects. In this model the farm agent's intelligence is implemented using a fuzzy decision tree that utilises two of these factors: the livestock price fetched at the last sale and the number of stock on the farm. On each iteration of the model, farms choose either to buy, sell or abstain from the market, thus creating an artificial supply and demand. The buyers and sellers then congregate at the saleyard, where livestock are auctioned using a second-price sealed bid. The price time series output by the model exhibits properties similar to those found in real livestock markets.
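
    The saleyard's second-price sealed-bid rule is simple to state in code. This is a generic Vickrey-auction sketch with invented bids, not the authors' implementation.

```python
# Second-price sealed-bid (Vickrey) auction: the highest bidder wins but
# pays the second-highest bid.
def second_price_auction(bids):
    """bids: dict mapping buyer -> sealed bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

print(second_price_auction({"farm A": 620.0, "farm B": 660.0, "farm C": 640.0}))
# -> ('farm B', 640.0): farm B wins at farm C's bid
```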

  3. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  4. Spatial prior in SVM-based classification of brain images

    NASA Astrophysics Data System (ADS)

    Cuingnet, Rémi; Chupin, Marie; Benali, Habib; Colliot, Olivier

    2010-03-01

    This paper introduces a general framework for spatial priors in SVM-based classification of brain images based on Laplacian regularization. Most existing methods include a spatial prior by adding a feature-aggregation step before the SVM classification. The problem with the aggregation step is that the individual information of each feature is lost. Our framework avoids this shortcoming by including the spatial prior directly in the SVM. We demonstrate that this framework can be used to derive embedded regularization corresponding to existing methods for classification of brain images, and we propose an efficient way to implement them. The framework is illustrated on the classification of MR images from 55 patients with Alzheimer's disease and 82 elderly controls selected from the ADNI database. The results demonstrate that the proposed algorithm enables the introduction of straightforward and anatomically consistent spatial priors into the classifier.
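
    One way to embed such a Laplacian prior, sketched under strong simplifying assumptions (a 1-D voxel chain and synthetic data): smoothing the features with exp(-beta*L/2) before training a linear SVM is equivalent to replacing the usual penalty w^T w with w^T exp(beta*L) w, which favors spatially smooth weight maps.

```python
# Laplacian spatial prior via feature smoothing before a linear SVM.
# Assumptions: a 1-D chain graph over 30 "voxels", synthetic subjects.
import numpy as np
from scipy.linalg import expm
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_vox, beta = 30, 1.0

A = np.eye(n_vox, k=1) + np.eye(n_vox, k=-1)   # chain adjacency
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
S = expm(-0.5 * beta * L)                      # smoothing operator exp(-beta*L/2)

X = rng.normal(size=(60, n_vox))               # subjects x voxels (synthetic)
y = rng.integers(0, 2, 60)
clf = SVC(kernel="linear").fit(X @ S, y)       # SVM on spatially smoothed features
print(f"training accuracy: {clf.score(X @ S, y):.2f}")
```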

  5. A Human Gait Classification Method Based on Radar Doppler Spectrograms

    NASA Astrophysics Data System (ADS)

    Tivive, Fok Hing Chi; Bouzerdoum, Abdesselam; Amin, Moeness G.

    2010-12-01

    An image classification technique, which has recently been introduced for visual pattern recognition, is successfully applied for human gait classification based on radar Doppler signatures depicted in the time-frequency domain. The proposed method has three processing stages. The first two stages are designed to extract Doppler features that can effectively characterize human motion based on the nature of arm swings, and the third stage performs classification. Three types of arm motion are considered: free-arm swings, one-arm confined swings, and no-arm swings. The last two arm motions can be indicative of a human carrying objects or a person in stressed situations. The paper discusses the different steps of the proposed method for extracting distinctive Doppler features and demonstrates their contributions to the final and desirable classification rates.

  6. Bazhenov Fm Classification Based on Wireline Logs

    NASA Astrophysics Data System (ADS)

    Simonov, D. A.; Baranov, V.; Bukhanov, N.

    2016-03-01

    This paper considers the main aspects of Bazhenov Formation interpretation and the application of machine learning algorithms to the Kolpashev type section of the Bazhenov Formation, including automatic classification algorithms that would change the scale of research from small to large. Machine learning algorithms help interpret the Bazhenov Formation in a reference well and in other wells. During this study, unsupervised and supervised machine learning algorithms were applied to interpret lithology and reservoir properties. This greatly simplifies the routine problem of manual interpretation and has an economic effect on the cost of laboratory analysis.

  7. Wavelet-based asphalt concrete texture grading and classification

    NASA Astrophysics Data System (ADS)

    Almuntashri, Ali; Agaian, Sos

    2011-03-01

    In this paper, we introduce a new method for the evaluation, quality control, and automatic grading of texture images representing different textural classes of Asphalt Concrete (AC). We also present a new automatic classification and recognition system for asphalt concrete texture grading based on the wavelet transform, fractals, and Support Vector Machines (SVM). Experimental results were simulated using different cross-validation techniques, achieving an average classification accuracy of 91.4% on a set of 150 images belonging to five different texture grades.

  8. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At the previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated; the fitness is the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation in the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method has not been automated because it requires not only the target image but also the number of categories for classification. In this paper, we describe some improvements for the implementation of automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and can provide reasonable classification results.
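
    A compact skeleton of the GA loop in procedures 1)-8), with invented parameters: the fitness function below is a stand-in for the CRMTC x SNSFS product, elite preservation and roulette selection follow the description, and one-point crossover is used where the method specifies multi-point.

```python
# GA skeleton: elitism, roulette selection, crossover, bit-flip mutation.
import random

random.seed(6)

def fitness(chrom):
    return sum(chrom) / len(chrom)     # stand-in for CRMTC * SNSFS

def roulette(pop, fits):
    r, acc = random.uniform(0, sum(fits)), 0.0
    for c, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return c
    return pop[-1]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]
for gen in range(30):
    fits = [fitness(c) for c in pop]
    nxt = [max(pop, key=fitness)[:]]               # elite preservation
    while len(nxt) < len(pop):
        a, b = roulette(pop, fits), roulette(pop, fits)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                  # one-point crossover
        child = [g ^ (random.random() < 0.01) for g in child]  # mutation
        nxt.append(child)
    pop = nxt

print(f"best fitness after 30 generations: {fitness(max(pop, key=fitness)):.2f}")
```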

  9. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  10. Ebolavirus classification based on natural vectors.

    PubMed

    Zheng, Hui; Yin, Changchuan; Hoang, Tung; He, Rong Lucy; Yang, Jie; Yau, Stephen S-T

    2015-06-01

    According to the WHO, ebolaviruses had resulted in 8818 human deaths in West Africa as of January 2015. To better understand the evolutionary relationships of the ebolaviruses and infer virulence from those relationships, we applied the alignment-free natural vector method to classify the newest ebolaviruses. The dataset includes three new Guinea viruses as well as 99 viruses from Sierra Leone. For the viruses of the family Filoviridae, both genus-label classification and species-label classification achieve an accuracy rate of 100%. We represented the relationships among Filoviridae viruses by Unweighted Pair Group Method with Arithmetic Mean (UPGMA) phylogenetic trees and found that the filoviruses can be separated well into three genera. We performed the phylogenetic analysis of the relationships among different species of Ebolavirus using their coding-complete genomes and seven viral protein genes (glycoprotein [GP], nucleoprotein [NP], VP24, VP30, VP35, VP40, and RNA polymerase [L]). The topology of the phylogenetic tree from the viral protein VP24 is consistent with the variations in virulence of the ebolaviruses. The result suggests that VP24 may be a pharmaceutical target for treating or preventing ebolaviruses. PMID:25803489
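
    For concreteness, one common formulation of the natural vector of a DNA sequence (per nucleotide: count, mean position, and scaled second central moment, 12 dimensions in total) can be sketched as follows; this is a generic reconstruction, not necessarily the exact normalization used in the paper.

```python
# Alignment-free natural vector of a DNA sequence (one common formulation).
import numpy as np

def natural_vector(seq):
    seq = seq.upper()
    N = len(seq)
    vec = []
    for base in "ACGT":
        pos = np.array([i + 1 for i, c in enumerate(seq) if c == base])
        n = len(pos)
        mu = pos.mean() if n else 0.0                         # mean position
        d2 = ((pos - mu) ** 2).sum() / (n * N) if n else 0.0  # scaled 2nd moment
        vec += [n, mu, d2]
    return np.array(vec)

# Sequences are then compared by distances between their natural vectors.
print(natural_vector("ACGTACGTGG"))
```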

  11. Knowledge Management in Role Based Agents

    NASA Astrophysics Data System (ADS)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent system literature, the role concept is increasingly researched to provide an abstraction to scope the beliefs, norms, and goals of agents and to shape the relationships of the agents in the organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture is also implemented and incorporated into the SEAGENT multi-agent system development framework.

  12. Who's your neighbor? neighbor identification for agent-based modeling.

    SciTech Connect

    Macal, C. M.; Howe, T. R.; Decision and Information Sciences; Univ. of Chicago

    2006-01-01

    Agent-based modeling and simulation, based on the cellular automata paradigm, is an approach to modeling complex systems comprised of interacting autonomous agents. Open questions in agent-based simulation focus on scale-up issues encountered in simulating large numbers of agents. Specifically, how many agents can be included in a workable agent-based simulation? One of the basic tenets of agent-based modeling and simulation is that agents only interact and exchange locally available information with other agents located in their immediate proximity or neighborhood of the space in which the agents are situated. Generally, an agent's set of neighbors changes rapidly as a simulation proceeds through time and as the agents move through space. Depending on the topology defined for agent interactions, proximity may be defined by spatial distance for continuous space, adjacency for grid cells (as in cellular automata), or by connectivity in social networks. Identifying an agent's neighbors is a particularly time-consuming computational task and can dominate the computational effort in a simulation. Two challenges in agent simulation are (1) efficiently representing an agent's neighborhood and the neighbors in it and (2) efficiently identifying an agent's neighbors at any time in the simulation. These problems are addressed differently for different agent interaction topologies. While efficient approaches have been identified for agent neighborhood representation and neighbor identification for agents on a lattice with general neighborhood configurations, other techniques must be used when agents are able to move freely in space. Techniques for the analysis and representation of spatial data are applicable to the agent neighbor identification problem. This paper extends agent neighborhood simulation techniques from the lattice topology to continuous space, specifically R2. Algorithms based on hierarchical (quad trees) or non-hierarchical data structures (grid cells) are…
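
    The grid-cell (bucket) data structure named above is straightforward to sketch for agents moving freely in R2. The implementation below is illustrative, not the paper's: agents are hashed into uniform cells, and a radius query scans only the 3x3 block of cells around the querying agent instead of all N agents.

    ```python
    from collections import defaultdict
    import math

    class NeighborGrid:
        def __init__(self, cell_size):
            self.cell = cell_size
            self.buckets = defaultdict(list)   # (ix, iy) -> [agent ids]
            self.pos = {}

        def _key(self, x, y):
            return (math.floor(x / self.cell), math.floor(y / self.cell))

        def insert(self, agent_id, x, y):
            self.pos[agent_id] = (x, y)
            self.buckets[self._key(x, y)].append(agent_id)

        def neighbors(self, agent_id, radius):
            # Valid as long as radius <= cell_size; larger radii would need
            # a wider block of cells.
            x, y = self.pos[agent_id]
            ix, iy = self._key(x, y)
            found = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for other in self.buckets.get((ix + dx, iy + dy), []):
                        if other == agent_id:
                            continue
                        ox, oy = self.pos[other]
                        if (ox - x) ** 2 + (oy - y) ** 2 <= radius ** 2:
                            found.append(other)
            return found

    grid = NeighborGrid(cell_size=1.0)
    grid.insert("a", 0.2, 0.3)
    grid.insert("b", 0.8, 0.4)
    grid.insert("c", 5.0, 5.0)
    print(grid.neighbors("a", radius=1.0))   # -> ['b']
    ```

    Moving agents would be removed from their old bucket and re-inserted on each position update; quad trees trade this O(1) hashing for adaptivity to uneven agent densities.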

  13. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome weaknesses and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating a natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and an immune algorithm. The web-based AI course was comprised of many files, such as HTML…

  14. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  15. SVM based target classification using RCS feature vectors

    NASA Astrophysics Data System (ADS)

    Bufler, Travis D.; Narayanan, Ram M.; Dogaru, Traian

    2015-05-01

    This paper investigates the application of SVM (Support Vector Machines) to the classification of stationary human targets and indoor clutter via spectral features. Applying Finite Difference Time Domain (FDTD) techniques allows us to examine the radar cross section (RCS) of humans and indoor clutter objects by utilizing different types of computer models. FDTD allows the spectral characteristics to be acquired over a wide range of frequencies, polarizations, aspect angles, and materials. The acquired target and clutter RCS spectral characteristics are then investigated in terms of their potential for target classification using SVMs. Based upon variables such as frequency and polarization, an SVM classifier can be trained to classify unknown targets as human or clutter. Furthermore, feature selection is applied to the spectral characteristics to determine the SVM classification accuracy on a reduced dataset. Classification accuracies of nearly 90% are achieved using radial and polynomial kernels.
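
    A minimal sketch of this style of experiment, with synthetic stand-ins for the FDTD-derived RCS spectra and default hyperparameters (both assumptions):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # Hypothetical data: 200 spectra x 64 frequency bins; 1 = human, 0 = clutter.
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 2, size=200)
    X[y == 1] += 0.5                 # give the "human" class a spectral offset

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    for kernel in ("rbf", "poly"):   # radial and polynomial kernels
        clf = SVC(kernel=kernel, C=1.0, gamma="scale").fit(Xtr, ytr)
        print(kernel, accuracy_score(yte, clf.predict(Xte)))
    ```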

  16. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of reasonable and effective image classification techniques is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is imperfect in several respects. Ensemble learning is therefore introduced to address this issue: a plurality of different learners is trained and their outputs are combined to obtain more accurate and stable results. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning over sparse representations to achieve optimal classification.
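
    The base learner here, sparse representation-based classification (SRC), can be sketched as follows; the l1 solver, dictionary layout and toy data are illustrative assumptions. Each test sample is coded over the dictionary of training samples and assigned to the class whose atoms give the smallest reconstruction residual.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_predict(D, labels, x, alpha=0.01):
        # Columns of D are training samples (atoms); solve x ~ D @ coef with
        # an l1 penalty, then compare per-class reconstruction residuals.
        coef = Lasso(alpha=alpha, max_iter=5000).fit(D, x).coef_
        residuals = {}
        for c in np.unique(labels):
            mask = labels == c
            residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
        return min(residuals, key=residuals.get)

    rng = np.random.default_rng(6)
    D = np.hstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
    labels = np.repeat([0, 1], 10)
    x = rng.normal(2, 1, 30)
    print(src_predict(D, labels, x))   # expected: 1
    ```

    An ensemble along the abstract's lines would train several such classifiers on different contexts or subsets and combine their votes.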

  17. Classification of LiDAR Data with Point Based Classification Methods

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for 3-dimensional (3D) data collection over wide areas. Nowadays, airborne LiDAR data are frequently used in various applications such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. The classification of the LiDAR points is the first step of the LiDAR data processing chain and should be handled properly, since applications such as 3D city modelling, building extraction and DEM generation directly use the classified point clouds. Different classification methods can be seen in recent research, and most studies work with gridded LiDAR point clouds. In grid-based processing of LiDAR data, the loss of characteristic points in the LiDAR point cloud, especially over vegetation and buildings, or the loss of height accuracy during the interpolation stage, is inevitable. In this case, the possible solution is to use the raw point cloud data for classification, to avoid the data and accuracy losses of the gridding process. In this study, the point-based classification possibilities of the LiDAR point cloud are investigated to obtain more accurate classes. Automatic point-based approaches, based on hierarchical rules, are proposed to derive ground, building and vegetation classes from the raw LiDAR point cloud data. In the proposed approaches, every single LiDAR point is analyzed according to features such as height, multi-return, etc., and is then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point-based classification process helped in the determination of more realistic rule sets. Detailed parameter analyses have been performed to obtain the most appropriate parameters for the rule sets to achieve accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features)…
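
    A per-point hierarchical rule set in the spirit of Approach 1 (height plus echo features) might look like the sketch below; the thresholds and class logic are invented for illustration and are not the paper's tuned rule sets.

    ```python
    def classify_point(height_above_ground, num_returns, return_number):
        # Hierarchical rules: ground first, then echo-based vegetation,
        # then height-based building; all thresholds are hypothetical.
        if height_above_ground < 0.2:
            return "ground"
        if num_returns > 1 and return_number < num_returns:
            # Intermediate echoes of a multi-return pulse suggest canopy.
            return "vegetation"
        if height_above_ground > 2.5:
            return "building"
        return "vegetation"

    points = [  # (height above ground [m], num returns, return number)
        (0.05, 1, 1), (6.3, 1, 1), (4.1, 3, 1), (1.0, 2, 1),
    ]
    for p in points:
        print(p, "->", classify_point(*p))
    ```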

  18. Pathological Bases for a Robust Application of Cancer Molecular Classification

    PubMed Central

    Diaz-Cano, Salvador J.

    2015-01-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol, its transcription reflects the adaptation of the tumor cells to the microenvironment, it can be passed on through mechanisms of intercellular transfer of genetic information (exosomes), and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next-generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411

  19. A clinically applicable molecular-based classification for endometrial cancers

    PubMed Central

    Talhouk, A; McConechy, M K; Leung, S; Li-Chang, H H; Kwon, J S; Melnyk, N; Yang, W; Senz, J; Boyd, N; Karnezis, A N; Huntsman, D G; Gilks, C B; McAlpine, J N

    2015-01-01

    Background: Classification of endometrial carcinomas (ECs) by morphologic features is inconsistent, and yields limited prognostic and predictive information. A new system for classification based on the molecular categories identified in The Cancer Genome Atlas is proposed. Methods: Genomic data from the Cancer Genome Atlas (TCGA) support classification of endometrial carcinomas into four prognostically significant subgroups; we used the TCGA data set to develop surrogate assays that could replicate the TCGA classification, but without the need for the labor-intensive and cost-prohibitive genomic methodology. Combinations of the most relevant assays were carried forward and tested on a new independent cohort of 152 endometrial carcinoma cases, and molecular vs clinical risk group stratification was compared. Results: Replication of TCGA survival curves was achieved with statistical significance using multiple different molecular classification models (16 total tested). Internal validation supported carrying forward a classifier based on the following components: mismatch repair protein immunohistochemistry, POLE mutational analysis and p53 immunohistochemistry as a surrogate for 'copy-number' status. The proposed molecular classifier was associated with clinical outcomes, as was stage, grade, lymph-vascular space invasion, nodal involvement and adjuvant treatment. In multivariable analysis both molecular classification and clinical risk groups were associated with outcomes, but differed greatly in composition of cases within each category, with half of POLE and mismatch repair loss subgroups residing within the clinically defined 'high-risk' group. Combining the molecular classifier with clinicopathologic features or risk groups provided the highest C-index for discrimination of outcome survival curves. Conclusions: Molecular classification of ECs can be achieved using clinically applicable methods on formalin-fixed paraffin-embedded samples, and provides…

  20. Multiclass microarray data classification based on confidence evaluation.

    PubMed

    Yu, H L; Gao, S; Qin, B; Zhao, J

    2012-01-01

    Microarray technology is becoming a powerful tool for clinical diagnosis, as it has the potential to discover gene expression patterns that are characteristic of a particular disease. To date, this possibility has received much attention in the context of cancer research, especially in tumor classification. However, most published articles have concentrated on the development of binary classification methods while neglecting the ubiquitous multiclass problems, and the few multiclass classification approaches available have had poor predictive accuracy. In an effort to improve classification accuracy, we developed a novel multiclass microarray data classification method. First, we applied a "one versus rest-support vector machine" to classify the samples. Then the classification confidence of each testing sample was evaluated according to its distribution in feature space, and samples with poor confidence were extracted. Next, a novel strategy, which we named the "class priority estimation method based on centroid distance", was used to make category decisions for those poor-confidence samples. This approach was tested on seven benchmark multiclass microarray datasets, with encouraging results, demonstrating its effectiveness and feasibility. PMID:22653582
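
    The two-stage decision rule can be sketched directly. The classifier, margin threshold and toy data below are illustrative assumptions: a one-vs-rest SVM decides the confident samples, and low-margin samples fall back to the nearest class centroid, a simplified stand-in for the centroid-distance class priority step.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.multiclass import OneVsRestClassifier

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=c, size=(30, 20)) for c in (0.0, 1.0, 2.0)])
    y = np.repeat([0, 1, 2], 30)

    ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1, 2)])

    def predict(x, margin=0.25):
        scores = ovr.decision_function(x[None, :])[0]
        best, second = np.sort(scores)[-1], np.sort(scores)[-2]
        if best - second >= margin:                # confident SVM decision
            return int(np.argmax(scores))
        # Low confidence: assign by nearest class centroid instead.
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

    print(predict(X[0]), predict(X[45]), predict(X[80]))
    ```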

  1. A new circulation type classification based upon Lagrangian air trajectories

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre; Sprenger, Michael; Wernli, Heini; Durán-Quesada, Ana María; Lorenzo, Maria Nieves; Gimeno, Luis

    2014-10-01

    A new classification method of the large-scale circulation characteristic for a specific target area (NW Iberian Peninsula) is presented, based on the analysis of 90-h backward trajectories arriving in this area calculated with the 3-D Lagrangian particle dispersion model FLEXPART. A cluster analysis is applied to separate the backward trajectories into up to five representative air streams for each day. Specific measures are then used to characterise the distinct air streams (e.g., curvature of the trajectories, cyclonic or anticyclonic flow, moisture evolution, origin and length of the trajectories). The robustness of the presented method is demonstrated in comparison with the Eulerian Lamb weather type classification. A case study of the 2003 heatwave is discussed in terms of the new Lagrangian circulation and the Lamb weather type classifications. It is shown that the new classification method adds valuable information about the pertinent meteorological conditions, which is missing in an Eulerian approach. The new method is climatologically evaluated for the five-year time period from December 1999 to November 2004. The ability of the method to capture the inter-seasonal circulation variability in the target region is shown. Furthermore, the multi-dimensional character of the classification is briefly discussed, in particular with respect to inter-seasonal differences. Finally, the relationship between the new Lagrangian classification and the precipitation in the target area is studied.
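
    The per-day grouping of backward trajectories into representative air streams can be illustrated generically. Resampling each trajectory to a fixed length and clustering with k-means are assumptions of this sketch (the abstract does not name its cluster algorithm), and random walks stand in for FLEXPART trajectories.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_trajectories(trajs, k=5, n_points=19):
        # trajs: list of (n_i, 2) lon/lat arrays; resample each to n_points.
        feats = []
        for t in trajs:
            idx = np.linspace(0, len(t) - 1, n_points).astype(int)
            feats.append(t[idx].ravel())
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
            np.array(feats))

    rng = np.random.default_rng(7)
    trajs = [np.cumsum(rng.normal(0, 0.5, (31, 2)), axis=0) for _ in range(40)]
    print(cluster_trajectories(trajs))
    ```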

  2. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper applies a density-of-cluster-centers algorithm to superpixels for automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance measures, from which automatic classification and outlier extraction are achieved. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations; a normalized density-and-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images faster than the density clustering algorithm, and effectively automates classification and outlier extraction.
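
    The description matches the density-peaks idea: cluster centers combine high local density with a large distance to any denser point, while outliers pair low density with large distance. A compact sketch under that reading, with invented cutoffs:

    ```python
    import numpy as np

    def density_peaks(X, dc=0.5, center_q=0.98, outlier_q=0.02):
        # Pairwise distances between (superpixel) feature vectors.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        rho = (d < dc).sum(axis=1) - 1       # local density within cutoff dc
        delta = np.empty(len(X))             # distance to nearest denser point
        for i in range(len(X)):
            denser = np.where(rho > rho[i])[0]
            delta[i] = d[i].max() if denser.size == 0 else d[i, denser].min()
        gamma = (rho / rho.max()) * (delta / delta.max())   # normalized score
        centers = np.where(gamma >= np.quantile(gamma, center_q))[0]
        outliers = np.where((rho <= np.quantile(rho, outlier_q))
                            & (delta >= np.quantile(delta, 1 - outlier_q)))[0]
        return centers, outliers

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 0.2, (50, 2)),     # cluster 1
                   rng.normal(3, 0.2, (50, 2)),     # cluster 2
                   [[10.0, 10.0]]])                 # isolated outlier
    print(density_peaks(X))
    ```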

  3. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver on the sensor, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are processed with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: 532 nm visible (green), 1064 nm near-infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new data source for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.

  4. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the rapidly growing popularity of the low-cost Microsoft Kinect sensor, scene classification, which is a hard, yet important, problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification. PMID:23846511

  5. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of the nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criteria. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226

  6. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires a continual update of the Mental Health Ontology. In this paper, we describe how the ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four different types of agents: Information agent, Data Warehouse agent, Data Mining agents and Ontology agent. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various mental health research experiments of a collaborative nature, which are greatly needed in times of increasing mental distress and illness.

  7. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOG), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining the group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior.

  8. A Spitzer-based classification of TNOs

    NASA Astrophysics Data System (ADS)

    Cooper, J. R.; Dalle Ore, C. M.; Emery, J. P.

    2011-12-01

    The outer reaches of the Solar System are home to the icy bodies known as trans-Neptunian objects (TNOs). Factors such as low albedo and small size have left this field relatively unexplored and, in turn, have encouraged the pursuit of these far-orbiting objects. A database of 48 objects was used by Fulchignoni et al. (2008) to cluster, model, and analyze the various spectra into classified taxa. The dataset adopted by Fulchignoni et al. (2008) was used as a baseline for visual colors, to which Dalle Ore et al. (in prep) added albedo measurements taken from Stansberry et al. (2008). To improve the classification accuracy, two near-infrared color bands from the Spitzer Space Telescope, centered at 3.55 and 4.50 microns, were added to the previous 7-filter photometry. The 9-band compilation produced results that differ from the previous studies; the addition of Spitzer data is expected to help distinguish varying compositional properties of icy objects. We present a redefined taxonomy that may uncover clues to evolutionary trends of the TNO population.

  9. Atmospheric circulation classification comparison based on wildfires in Portugal

    NASA Astrophysics Data System (ADS)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with an impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological…

  10. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
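
    As a rough illustration (not the paper's exact estimator), the sketch below injects spatial context by mean-filtering each band before computing mutual information against a reference class map; the filter size, the 16-bin quantization and the toy data are all assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.metrics import mutual_info_score

    def select_bands(cube, reference, k=10):
        # cube: (rows, cols, bands); reference: (rows, cols) class labels.
        scores = []
        for b in range(cube.shape[-1]):
            band = uniform_filter(cube[..., b], size=3)  # add spatial context
            bins = np.quantile(band, np.linspace(0, 1, 17)[1:-1])
            digitized = np.digitize(band, bins)
            scores.append(mutual_info_score(reference.ravel(),
                                            digitized.ravel()))
        return np.argsort(scores)[::-1][:k]

    rng = np.random.default_rng(3)
    ref = rng.integers(0, 4, size=(32, 32))
    cube = rng.normal(size=(32, 32, 20))
    cube[..., 5] += ref                    # band 5 carries the class signal
    print(select_bands(cube, ref, k=3))    # band 5 should rank first
    ```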

  11. Space Situational Awareness using Market Based Agents

    NASA Astrophysics Data System (ADS)

    Sullivan, C.; Pier, E.; Gregory, S.; Bush, M.

    2012-09-01

    Space surveillance for the DoD is not limited to the Space Surveillance Network (SSN). Other DoD-owned assets have some existing capability for tasking but have no systematic way to work collaboratively with the SSN. These are run by diverse organizations including the Services, other defense and intelligence agencies, and national laboratories. Beyond these organizations, academic and commercial entities have systems that possess SSA capability. Almost all of these assets have some level of connectivity, security, and potential autonomy. Exploiting them in a mutually beneficial structure could provide a more comprehensive, efficient and cost-effective solution for SSA. The collection of all potential assets, providers and consumers of SSA data comprises a market which is functionally illiquid. The development of a dynamic marketplace for SSA data could give would-be providers the opportunity to sell data to SSA consumers for monetary or incentive-based compensation. A well-conceived market architecture could drive down SSA data costs through increased supply and improve efficiency through increased competition. Oceanit will investigate market and market-agent architectures, protocols, standards, and incentives toward producing high-volume/low-cost SSA.

  12. Directional wavelet based features for colonic polyp classification.

    PubMed

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied to the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods for the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied often and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information from the subbands of the wavelet based methods. Most of the 25 approaches in total were already published in different texture classification contexts; thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel: they extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state…

  13. Efficient Classification-Based Relabeling in Mixture Models

    PubMed Central

    Cron, Andrew J.; West, Mike

    2011-01-01

    Effective component relabeling in Bayesian analyses of mixture models is critical to the routine use of mixtures in classification with analysis based on Markov chain Monte Carlo methods. The classification-based relabeling approach here is computationally attractive and statistically effective, and scales well with sample size and number of mixture components concordant with enabling routine analyses of increasingly large data sets. Building on the best of existing methods, practical relabeling aims to match data:component classification indicators in MCMC iterates with those of a defined reference mixture distribution. The method performs as well as or better than existing methods in small dimensional problems, while being practically superior in problems with larger data sets as the approach is scalable. We describe examples and computational benchmarks, and provide supporting code with efficient computational implementation of the algorithm that will be of use to others in practical applications of mixture models. PMID:21660126
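
    In code, the matching step reduces to an optimal assignment between the component labels of an MCMC iterate and those of a reference labeling. The sketch below is an assumption-level illustration, not the authors' implementation: it applies the Hungarian algorithm to a label co-occurrence matrix.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def relabel(z_iter, z_ref, k):
        # cooc[i, j]: how often label i in this iterate coincides with label j
        # in the reference; maximize total agreement (minimize its negation).
        cooc = np.zeros((k, k))
        for a, b in zip(z_iter, z_ref):
            cooc[a, b] += 1
        rows, cols = linear_sum_assignment(-cooc)
        mapping = dict(zip(rows, cols))
        return np.array([mapping[a] for a in z_iter])

    z_ref = np.array([0, 0, 1, 1, 2, 2])
    z_iter = np.array([2, 2, 0, 0, 1, 1])   # same partition, permuted labels
    print(relabel(z_iter, z_ref, k=3))      # -> [0 0 1 1 2 2]
    ```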

  14. Robust materials classification based on multispectral polarimetric BRDF imagery

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Zhao, Yong-qiang; Luo, Li; Liu, Dan; Pan, Quan

    2009-07-01

    When light is reflected from an object's surface, its spectral characteristics are affected by the surface's elemental composition, while its polarimetric characteristics are determined by the surface's orientation, roughness and conductance. Multispectral polarimetric imaging records both the spectral and polarimetric characteristics of the light, adds dimensions to the spatial intensity typically acquired, and can provide unique, discriminatory information that may augment material classification techniques. But because object surfaces are not Lambertian, the spectral and polarimetric characteristics change with the illumination and observation angles; if the BRDF is ignored during material classification, misclassification is inevitable. To obtain features that are robust for material classification on non-Lambertian surfaces, a new classification method based on multispectral polarimetric BRDF characteristics is proposed in this paper. The Support Vector Machine method is adopted to classify targets in cluttered grass environments. The training sets were obtained under sunny conditions, while the test sets came from three different weather and detection conditions; the classification results based on multispectral polarimetric BRDF features are then compared with two other sets of results, based on spectral information and on multispectral polarimetric information, under sunny, cloudy and dark conditions respectively. The experimental results show that the method based on multispectral polarimetric BRDF features performs most robustly, and its classification precision surpasses the other two. When imaging objects in dark weather it is difficult to distinguish different materials using spectral features alone, as the gray levels of backgrounds and targets at each wavelength are very close, but the method proposed in this paper solves this problem efficiently.

  15. Agent Persuasion Mechanism of Acquaintance

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Agent persuasion can improve negotiation efficiency in dynamic environments thanks to its initiative, autonomy, etc., and it is strongly affected by acquaintance. A classification of acquaintance in agent persuasion is illustrated, as is the agent persuasion model of acquaintance. The concept of the agent persuasion degree of acquaintance is then given. Finally, the related interaction mechanism is elaborated.

  16. NIM: A Node Influence Based Method for Cancer Classification

    PubMed Central

    Wang, Yiwen; Yang, Jianhua

    2014-01-01

    The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples; the second is to compute the node influence of the training samples; the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix; and the last is to classify each test sample based on its similarity to every class. The datasets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART. PMID:25180045
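
    The four parts map onto a short sketch. The cosine similarity and the summed-similarity influence score below are simplifying assumptions that stand in for the paper's node influence model.

    ```python
    import numpy as np

    def nim_classify(X_train, y_train, X_test):
        def cos(a, b):
            a = a / np.linalg.norm(a, axis=1, keepdims=True)
            b = b / np.linalg.norm(b, axis=1, keepdims=True)
            return a @ b.T
        influence = cos(X_train, X_train).sum(axis=1)   # part 2 (simplified)
        sim = cos(X_test, X_train)                      # parts 1 and 3
        classes = np.unique(y_train)
        scores = np.stack([(sim[:, y_train == c] * influence[y_train == c])
                           .sum(axis=1) for c in classes], axis=1)
        return classes[np.argmax(scores, axis=1)]       # part 4

    rng = np.random.default_rng(4)
    Xtr = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(2, 1, (20, 50))])
    ytr = np.repeat([0, 1], 20)
    print(nim_classify(Xtr, ytr, Xtr[:3]))   # expected: [0 0 0]
    ```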

  17. Impact of Information based Classification on Network Epidemics.

    PubMed

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work, three real network datasets with 22002, 22469 and 22607 undirected edges, respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  18. Impact of Information based Classification on Network Epidemics

    NASA Astrophysics Data System (ADS)

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-06-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work, three real network datasets with 22002, 22469 and 22607 undirected edges, respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results.

  19. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work, three real network datasets with 22002, 22469 and 22607 undirected edges, respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  20. Classification of CT-brain slices based on local histograms

    NASA Astrophysics Data System (ADS)

    Avrunin, Oleg G.; Tymkovych, Maksym Y.; Pavlov, Sergii V.; Timchik, Sergii V.; Kisała, Piotr; Orakbaev, Yerbol

    2015-12-01

    Neurosurgical intervention is a very complicated process. Modern operating procedures are based on data such as CT, MRI, etc., and the automated analysis of these data is an important task for researchers. Some modern methods of brain-slice segmentation use additional information to process these images, and classification can be used to obtain this information. To classify the CT images of the brain, we suggest using local histograms and features extracted from them. The paper shows the process of feature extraction and classification of CT-slices of the brain. The feature extraction process is specialized for axial cross-sections of the brain. The work can be applied to medical neurosurgical systems.

  1. Rule-based Cervical Spine Defect Classification Using Medical Narratives.

    PubMed

    Deng, Yihan; Groll, Mathias Jacob; Denecke, Kerstin

    2015-01-01

    Classifying the defects occurring at the cervical spine provides the basis for surgical treatment planning and therapy recommendation. This process requires evidence from patient records. Further, the degree of a defect needs to be encoded in a standardized form to facilitate data exchange and multimodal interoperability. In this paper, a concept for automatic defect classification based on information extracted from the textual data of patient records is presented. In a retrospective study, the classifier is applied to clinical documents and the classification results are evaluated. PMID:26262337

  2. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the type of crop contained in a region for agricultural planning. There are two types of classification: pixel based and object based. While pixel based classification methods rely on the information in each pixel, object based classification relies on objects (image objects) formed by combining information from sets of similar pixels. A multispectral image has a higher spectral resolution than a panchromatic image; a panchromatic image has a higher spatial resolution than a multispectral image. Pan sharpening is the process of merging high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution color image. The aim of the study was to compare the classification accuracy achievable with a pan sharpened image. In this study, a SPOT 5 image dated April 2013 was used: the 5 m panchromatic image and the 10 m multispectral image were pan sharpened. Four different classification methods were investigated: maximum likelihood, decision tree and support vector machine at the pixel level, and object based classification. The SPOT 5 pan sharpened image was used to classify sunflowers and corn at a study site located in the Kadirli region of Osmaniye, Turkey. The effects of the pan sharpened image on the classification results were also examined. Accuracy assessment showed that the object based classification resulted in better overall accuracy values than the others. The results indicate that these classification methods can be used for identifying sunflowers and corn and estimating crop areas.
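
    The abstract does not name the pan-sharpening algorithm used; as a generic illustration of the operation, here is a minimal Brovey-transform sketch, which multiplies the (already upsampled) multispectral bands by the ratio of the panchromatic band to their mean intensity.

    ```python
    import numpy as np

    def brovey_pansharpen(ms, pan):
        # ms: (rows, cols, bands) multispectral, resampled to pan resolution.
        # pan: (rows, cols) panchromatic band.
        intensity = ms.mean(axis=-1)
        ratio = pan / np.maximum(intensity, 1e-6)   # avoid division by zero
        return ms * ratio[..., None]

    ms = np.random.rand(100, 100, 4)    # toy 10 m multispectral, upsampled
    pan = np.random.rand(100, 100)      # toy 5 m panchromatic
    print(brovey_pansharpen(ms, pan).shape)   # -> (100, 100, 4)
    ```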

  3. Exploring cooperation and competition using agent-based modeling

    PubMed Central

    Elliott, Euel; Kiel, L. Douglas

    2002-01-01

    Agent-based modeling enhances our capacity to model competitive and cooperative behaviors at both the individual and group levels of analysis. Models presented in these proceedings produce consistent results regarding the relative fragility of cooperative regimes among agents operating under diverse rules. These studies also show how competition and cooperation may generate change at both the group and societal level. Agent-based simulation of competitive and cooperative behaviors may reveal the greatest payoff to social science research of all agent-based modeling efforts because of the need to better understand the dynamics of these behaviors in an increasingly interconnected world. PMID:12011396

  4. Volatility clustering in agent based market models

    NASA Astrophysics Data System (ADS)

    Giardina, Irene; Bouchaud, Jean-Philippe

    2003-06-01

    We define and study a market model, where agents have different strategies among which they can choose, according to their relative profitability, with the possibility of not participating in the market. The price is updated according to the excess demand, and the wealth of the agents is properly accounted for. Only two parameters play a significant role: one describes the impact of trading on the price, and the other describes the propensity of agents to be trend-following or contrarian. We observe three different regimes, depending on the value of these two parameters: an oscillating phase with bubbles and crashes, an intermittent phase and a stable ‘rational’ market phase. The statistics of price changes in the intermittent phase resembles that of real price changes, with small linear correlations, fat tails and long-range volatility clustering. We discuss how the time dependence of these two parameters spontaneously drives the system into the intermittent region.
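
    A toy sketch of the two-parameter mechanics, assuming a simple excess-demand price update with a market-impact parameter and a trend-following/contrarian propensity g; the paper's strategy scoring, wealth accounting and non-participation rule are left out.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, T = 500, 1000
    impact = 5e-3      # impact of trading on the (log) price
    g = 0.8            # g > 0: trend-following agents; g < 0: contrarian

    log_price = np.zeros(T)
    ret = 0.0
    for t in range(1, T):
        # Each agent buys (+1), sells (-1) or abstains (0) based on the last
        # return plus idiosyncratic noise.
        signal = g * ret + rng.normal(0.0, 1.0, N)
        demand = np.where(signal > 0.5, 1, np.where(signal < -0.5, -1, 0))
        ret = impact * demand.sum()                 # excess-demand update
        log_price[t] = log_price[t - 1] + ret
    print("return std:", np.std(np.diff(log_price)))
    ```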

  5. Reordering based integrative expression profiling for microarray classification

    PubMed Central

    2012-01-01

    Background Current network-based microarray analysis uses the information of interactions among concerned genes/gene products, but still considers each gene expression individually. We propose an organized knowledge-supervised approach - Integrative eXpression Profiling (IXP) - to improve microarray classification accuracy and help discover groups of genes that have been too weak to detect individually by traditional methods. To implement IXP, an ant colony optimization reordering (ACOR) algorithm is used to group functionally related genes in an ordered way. Results Using Alzheimer's disease (AD) as an example, we demonstrate how to apply the ACOR-based IXP approach to microarray classification. Using a microarray dataset - GSE1297, with 31 samples - as the training set, the result for blinded classification on another microarray dataset - GSE5281, with 151 samples - shows that our approach can improve accuracy from 74.83% to 82.78%. A recently published 1372-probe signature for AD can only achieve 61.59% accuracy under the same conditions. The ACOR-based IXP approach also performs better than IXP approaches based on classic network ranking, graph clustering, and random-ordering methods in an overall classification performance comparison. Conclusions The ACOR-based IXP approach can serve as a knowledge-supervised feature transformation approach that increases classification accuracy dramatically, by transforming each gene expression profile into an integrated expression profile whose features are input into standard classifiers. The IXP approach integrates both gene expression information and organized knowledge - disease gene/protein network topology information - which is represented as both network node weights (local topological properties) and network node orders (global topological characteristics). PMID:22536860

  6. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence, and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  7. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence, and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  8. Risk-based Classification of Incidents

    NASA Technical Reports Server (NTRS)

    Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.

    2003-01-01

    As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.

  9. An Extension Dynamic Model Based on BDI Agent

    NASA Astrophysics Data System (ADS)

    Yu, Wang; Feng, Zhu; Hua, Geng; WangJing, Zhu

    This paper's research is based on the BDI Agent model. First, it analyzes the deficiencies of the traditional BDI Agent model, and then proposes an extended dynamic BDI Agent model based on the traditional one. The extended model can quickly achieve the internal interactions of the traditional BDI Agent model, deal with complex issues in dynamic and open environments, and react quickly. The new model is shown to be natural and reasonable by using it to verify the origin of civilization with a model of monkeys learning to eat sweet potatoes, designed on the extension dynamic model. Its feasibility is verified by comparing the extended dynamic BDI Agent model with the traditional BDI Agent model using SWARM, and it has important theoretical significance.

  10. Competency Based Curriculum for Real Estate Agent.

    ERIC Educational Resources Information Center

    McCloy, Robert J.

    This publication is a curriculum and teaching guide for preparing real estate agents in the state of West Virginia. The guide contains 30 units, or lessons. Each lesson is designed to cover three to five hours of instruction time. Competencies provided for each lesson are stated in terms of what the student should be able to do as a result of the…

  11. AGENT-BASED MODELING OF INDUSTRIAL ECOSYSTEMS

    EPA Science Inventory

    The objectives of this research are to investigate behavioral and organizational questions associated with environmental regulation of firms, and to test specifically whether a bottom-up approach that highlights principal-agent problems offers new insights and empirical validi...

  12. An Agent-Based Cockpit Task Management System

    NASA Technical Reports Server (NTRS)

    Funk, Ken

    1997-01-01

    An agent-based program to facilitate Cockpit Task Management (CTM) in commercial transport aircraft is developed and evaluated. The agent-based program called the AgendaManager (AMgr) is described and evaluated in a part-task simulator study using airline pilots.

  13. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  14. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and the correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools in integration can provide an efficient approach for obtaining information about improvements in the classification accuracy and help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  15. Segmentation Based Fuzzy Classification of High Resolution Images

    NASA Astrophysics Data System (ADS)

    Rao, Mukund; Rao, Suryaprakash; Masser, Ian; Kasturirangan, K.

    Information extraction from satellite images is the process of delineating entities in the image which pertain to some feature on the earth; by associating an attribute with each entity, a classification of the image is obtained. Classification is a common technique for extracting information from remote sensing data and, by and large, the common classification techniques mainly exploit the spectral characteristics of remote sensing images and attempt to detect patterns in spectral information to classify images. These are based on a per-pixel analysis of the spectral information: "clustering" or "grouping" of pixels is done to generate meaningful thematic information. Most of the classification techniques apply statistical pattern recognition to image spectral vectors to "label" each pixel with appropriate class information from a set of training information. Segmentation, on the other hand, is not new, but it is still seldom used in image processing of remotely sensed data. Although there has been a lot of development in segmentation of grey-tone images in this field and in others, such as robotic vision, there has been little progress in segmentation of colour or multi-band imagery. Especially within the last two years many new segmentation algorithms as well as applications were developed, but not all of them lead to qualitatively convincing results while being robust and operational. One reason is that the segmentation of an image into a given number of regions is a problem with a huge number of possible solutions. Newer algorithms based on a fractal approach could eventually revolutionize image processing of remotely sensed data. The paper looks at applying spatial concepts to image processing, paving the way to algorithmically formulate some more advanced aspects of cognition and inference. In GIS-based spatial analysis, vector-based tools have already been able to support advanced tasks generating new knowledge. By identifying objects (as segmentation results) from
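
    The per-pixel "clustering" of spectral vectors contrasted above with segmentation can be sketched in a few lines; the multi-band image below is a random stand-in for real imagery.

```python
# Sketch of per-pixel clustering of spectral vectors, the classical
# approach the abstract contrasts with segmentation. Synthetic image.
import numpy as np
from sklearn.cluster import KMeans

rows, cols, bands = 100, 100, 4
image = np.random.rand(rows, cols, bands)        # stand-in multi-band image
pixels = image.reshape(-1, bands)                # one spectral vector per pixel

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
thematic_map = labels.reshape(rows, cols)        # per-pixel class labels
print(np.bincount(labels))                       # pixels per spectral cluster
```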

  16. Detection/classification/quantification of chemical agents using an array of surface acoustic wave (SAW) devices

    NASA Astrophysics Data System (ADS)

    Milner, G. Martin

    2005-05-01

    ChemSentry is a portable system used to detect, identify, and quantify chemical warfare (CW) agents. Electrochemical (EC) cell sensor technology is used for blood agents and an array of surface acoustic wave (SAW) sensors is used for nerve and blister agents. The combination of the EC cell and the SAW array provides sufficient sensor information to detect, classify and quantify all CW agents of concern using smaller, lighter, lower-cost units. Initial development of the SAW array and processing was a key challenge for ChemSentry, requiring several years of fundamental testing of polymers and coating methods to finalize the sensor array design in 2001. Following the finalization of the SAW array, nearly three years of intensive testing in both laboratory and field environments were required in order to gather sufficient data to fully understand the response characteristics. Virtually unbounded permutations of agent and environmental characteristics must be considered in order to operate against all agents and all environments of interest to the U.S. military and other potential users of ChemSentry. The resulting signal processing design, matched to this extensive body of measured data (over 8,000 agent challenges and 10,000 hours of ambient data), is considered to be a significant advance in the state of the art for CW agent detection.

  17. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely understudied, and represents an area where further research is needed to…

  18. Classification of Regional Ionospheric Disturbances Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil

    2016-07-01

    The ionosphere is an anisotropic, inhomogeneous, time-varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only by indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive range of periods, durations, amplitudes and frequencies of disturbances. Since geomagnetic and solar indices such as Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and W-index provide information about variability on a global scale, identification and classification of regional disturbances poses a challenge. The main aim of this study is to identify the regional effects of global geomagnetic storms and classify them according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, which is one of the major parameters of the ionosphere, will be used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. In this work, for the automated classification of regional disturbances, a classification technique based on a robust machine learning method that has found widespread use, the Support Vector Machine (SVM), is proposed. SVM is a supervised learning model used for classification, with an associated learning algorithm that analyzes the data and recognizes patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding the data into higher dimensional feature spaces. Performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. As a result of implementing the developed classification
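
    A minimal sketch of the nonlinear SVM classification described above, assuming two illustrative features (say, a regional TEC deviation and a geomagnetic index) and synthetic quiet/disturbed samples rather than the TNPGN-Active data:

```python
# Minimal sketch of RBF-kernel SVM classification of quiet vs. disturbed
# days. The two features and the samples are hypothetical stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
quiet = rng.normal([0.0, 0.0], 0.5, size=(200, 2))      # quiet days
disturbed = rng.normal([2.0, 1.5], 0.8, size=(200, 2))  # storm-affected days
X = np.vstack([quiet, disturbed])
y = np.array([0] * 200 + [1] * 200)

# The RBF kernel embeds the data in a higher-dimensional feature space,
# enabling the nonlinear decision boundaries mentioned in the abstract.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
print(model.score(X, y))
```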

  19. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected among all those which developed during this time-frame, are analyzed. In particular, the classification allows discrimination between two cyclones (which occurred in October 1996 and in March 1999) which both display a very well

  20. Bayesian outcome-based strategy classification.

    PubMed

    Lee, Michael D

    2016-03-01

    Hilbig and Moshagen (Psychonomic Bulletin & Review, 21, 1431-1443, 2014) recently developed a method for making inferences about the decision processes people use in multi-attribute forced choice tasks. Their paper makes a number of worthwhile theoretical and methodological contributions. Theoretically, they provide an insightful psychological motivation for a probabilistic extension of the widely used "weighted additive" (WADD) model, and show how this model, as well as other important models like "take-the-best" (TTB), can and should be expressed in terms of meaningful priors. Methodologically, they develop an inference approach based on the Minimum Description Length (MDL) principle that balances both the goodness-of-fit and complexity of the decision models they consider. This paper aims to preserve these useful contributions, but provide a complementary Bayesian approach with some theoretical and methodological advantages. We develop a simple graphical model, implemented in JAGS, that allows for fully Bayesian inferences about which models people use to make decisions. To demonstrate the Bayesian approach, we apply it to the models and data considered by Hilbig and Moshagen, showing how a prior predictive analysis of the models, and posterior inferences about which models people use and the parameter settings at which they use them, can contribute to our understanding of human decision making. PMID:25697091
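
    The core Bayesian idea, posterior probabilities over candidate decision strategies given observed choices, can be sketched without JAGS under a simple assumption that each strategy's predicted choice is executed with some error rate; the strategies' predictions and the error rate below are invented:

```python
# Toy sketch of the idea (not Lee's JAGS model): Bayesian inference over
# which decision strategy generated a set of observed choices, assuming
# each strategy's predicted choice is executed with error rate eps.
import numpy as np

eps = 0.1                                  # assumed execution error rate
observed = np.array([0, 0, 1, 0, 1, 0])    # observed choices on 6 item pairs
predictions = {                            # hypothetical model predictions
    "TTB":  np.array([0, 0, 1, 1, 1, 0]),
    "WADD": np.array([0, 0, 1, 0, 1, 0]),
}
prior = {"TTB": 0.5, "WADD": 0.5}

def likelihood(pred):
    agree = (pred == observed)
    return np.prod(np.where(agree, 1 - eps, eps))

post = {m: prior[m] * likelihood(p) for m, p in predictions.items()}
z = sum(post.values())
print({m: v / z for m, v in post.items()})  # posterior model probabilities
```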

  1. Similarity-Based Classification in Partially Labeled Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Lü, Linyuan

    Two main difficulties in the problem of classification in partially labeled networks are the sparsity of the known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, where the basic assumption is that two nodes are more likely to be categorized into the same class if they are more similar. In this paper, we introduce ten similarity indices defined on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and give more accurate classification than the relational-neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information of known labeled nodes is sufficient, the indices considering only local information can perform as well as the global indices while having much lower computational complexity.
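
    A minimal sketch of the similarity-based idea using one local index, common neighbors, on a toy graph (not the political-books network): an unlabeled node takes the label whose labeled nodes it is most similar to.

```python
# Minimal sketch of similarity-based classification with the common-
# neighbors index: an unlabeled node receives the label with the highest
# total similarity to the labeled nodes. Toy graph, sparse labels.
from collections import Counter, defaultdict

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
labels = {0: "A", 1: "A", 5: "B"}          # sparse known labels

nbrs = defaultdict(set)
for u, v in edges:
    nbrs[u].add(v); nbrs[v].add(u)

def common_neighbors(u, v):
    return len(nbrs[u] & nbrs[v])

for node in sorted(set(nbrs) - set(labels)):
    scores = Counter()
    for lab_node, lab in labels.items():
        scores[lab] += common_neighbors(node, lab_node)
    guess = scores.most_common(1)[0][0] if scores else None
    print(node, "->", guess)
```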

  2. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contours particularly influence the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in those conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective for producing a vegetation map than pixel-based classification.

  3. Networks based on collisions among mobile agents

    NASA Astrophysics Data System (ADS)

    González, Marta C.; Lind, Pedro G.; Herrmann, Hans J.

    2006-12-01

    We investigate in detail a recent model of colliding mobile agents [M.C. González, P.G. Lind, H.J. Herrmann, Phys. Rev. Lett. 96 (2006) 088702. cond-mat/0602091], used as an alternative approach for constructing evolving networks of interactions formed by collisions governed by suitable dynamical rules. The system of mobile agents evolves towards a quasi-stationary state which is, apart from small fluctuations, well characterized by the density of the system and the residence time of the agents. The residence time defines a collision rate, and by varying this collision rate, the system percolates at a critical value, with the emergence of a giant cluster whose critical exponents are the ones of two-dimensional percolation. Further, the degree and clustering coefficient distributions, and the average path length, show that the network associated with such a system presents non-trivial features which, depending on the collision rules, enables one not only to recover the main properties of standard networks, such as exponential, random and scale-free networks, but also to obtain other topological structures. To illustrate, we show a specific example where the obtained structure has topological features which characterize the structure and evolution of social networks accurately in different contexts, ranging from networks of acquaintances to networks of sexual contacts.
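
    The collision-driven network construction can be sketched with simplified dynamics (random walkers on a periodic grid, with an edge added per collision); the movement and collision rules below are illustrative, not the exact rules of the model.

```python
# Sketch of a collision-driven network: agents move randomly on a grid
# and an edge is added whenever two agents occupy the same cell.
import random
from itertools import combinations

random.seed(0)
L, n_agents, steps = 20, 30, 200
pos = [(random.randrange(L), random.randrange(L)) for _ in range(n_agents)]
edges = set()

for _ in range(steps):
    pos = [((x + random.choice((-1, 0, 1))) % L,
            (y + random.choice((-1, 0, 1))) % L) for x, y in pos]
    cells = {}
    for i, p in enumerate(pos):
        cells.setdefault(p, []).append(i)
    for occupants in cells.values():     # every pair sharing a cell collides
        for i, j in combinations(occupants, 2):
            edges.add((min(i, j), max(i, j)))

degree = [0] * n_agents
for i, j in edges:
    degree[i] += 1; degree[j] += 1
print(len(edges), sorted(degree, reverse=True)[:5])
```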

  4. Character-based DNA barcoding: a superior tool for species classification.

    PubMed

    Bergmann, Tjard; Hadrys, Heike; Breves, Gerhard; Schierwater, Bernd

    2009-01-01

    In zoonosis research, only correctly assigned host-agent-vector associations can lead to success. Given that most biological species on Earth, from agents to hosts and from prokaryotes to vertebrates, are still undetected, the development of a reliable and universal diversity detection tool becomes a conditio sine qua non. In this context, modern molecular-genetic techniques have, at breathtaking speed, become acknowledged tools for the classification of life forms at all taxonomic levels. While previous DNA-barcoding techniques were criticised for several reasons (Moritz and Cicero, 2004; Rubinoff et al., 2006a, b; Rubinoff, 2006; Rubinoff and Haines, 2006), a new approach, the so-called CAOS-barcoding (Character Attribute Organisation System), avoids most of the weak points. Traditional DNA-barcoding approaches are distance-based, i.e., they use genetic distances and tree construction algorithms for the classification of species or lineages. The definition of limit values is enforced and prohibits a discrete or clear assignment. In comparison, the new character-based barcoding (CAOS-barcoding; DeSalle et al., 2005; DeSalle, 2006; Rach et al., 2008) works with discrete single characters and character combinations, which permits a clear, unambiguous classification. In Hannover (Germany) we are optimising this system and developing a semiautomatic high-throughput procedure for the hosts, agents and vectors being studied within the Zoonosis Centre of the "Stiftung Tierärztliche Hochschule Hannover". Our primary research concentrates on insects, the most successful and species-rich animal group on Earth (every fourth animal is a beetle). One subgroup, the winged insects (Pterygota), represents the outstanding majority of all zoonosis-relevant animal vectors. PMID:19999380

  5. Proposed Classification of Auriculotemporal Nerve, Based on the Root System

    PubMed Central

    Komarnitki, Iulian; Tomczyk, Jacek; Ciszek, Bogdan; Zalewska, Marta

    2015-01-01

    The topography of the auriculotemporal nerve (ATN) root system is the main criterion of this nerve classification. Previous publications indicate that ATN may have between one and five roots. Most common is a one- or two-root variant of the nerve structure. The problem of many publications is the inconsistency of nomenclature which concerns the terms “roots”, “connecting branches”, or “branches” that are used to identify the same structures. This study was performed on 80 specimens (40 adults and 40 fetuses) to propose a classification based on: (i) the number of roots, (ii) way of root division, and (iii) configuration of interradicular fibers that form the ATN trunk. This new classification is a remedy for inconsistency of nomenclature of ATN in the infratemporal fossa. This classification system has proven beneficial when organizing all ATN variants described in previous studies and could become a helpful tool for surgeons and dentists. Examination of ATN from the infratemporal fossa of fetuses (the youngest was at 18 weeks gestational age) showed that, at that stage, the nerve is fully developed. PMID:25856464

  6. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a certain linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all training samples used to represent it. It then exploits the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest-neighbor classifier.
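
    For context, the baseline SRC decision rule that such frameworks build on can be sketched as follows: code the test sample as a sparse combination of all training samples (here via scikit-learn's Lasso), then assign the class whose samples give the smallest reconstruction residual. This is plain SRC on synthetic data, not the deviation-correcting framework itself.

```python
# Minimal sketch of the baseline SRC rule: sparse-code the test sample
# over all training samples, then classify by class-wise residual.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, per_class = 50, 10
classes = [rng.normal(c, 1.0, size=(per_class, d)) for c in (0.0, 2.0, 4.0)]
train = np.vstack(classes)                       # rows = training samples
labels = np.repeat([0, 1, 2], per_class)
test = rng.normal(2.0, 1.0, size=d)              # should land in class 1

coder = Lasso(alpha=0.05, max_iter=10000).fit(train.T, test)
coef = coder.coef_                               # sparse coefficients

residuals = []
for c in (0, 1, 2):
    mask = (labels == c)
    recon = train[mask].T @ coef[mask]           # reconstruction from class c only
    residuals.append(np.linalg.norm(test - recon))
print("predicted class:", int(np.argmin(residuals)))
```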

  7. Metagenome fragment classification based on multiple motif-occurrence profiles.

    PubMed

    Matsushita, Naoki; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo

    2014-01-01

    A vast amount of metagenomic data has been obtained by extracting multiple genomes simultaneously from microbial communities, including genomes from uncultivable microbes. By analyzing these metagenomic data, novel microbes are discovered and new microbial functions are elucidated. The first step in analyzing these data is classifying the sequenced reads into the reference genomes from which each read could be derived. The Naïve Bayes Classifier is one method for this classification. To identify the derivation of the reads, this method calculates a score based on the occurrence of DNA sequence motifs in each reference genome. However, large differences in the sizes of the reference genomes can bias the scoring of the reads. This bias might cause erroneous classification and decrease the classification accuracy. To address this issue, we have updated the Naïve Bayes Classifier method to use multiple sets of occurrence profiles for each reference genome, normalizing the genome sizes by dividing each genome sequence into a set of subsequences of similar length and generating a profile for each subsequence. This multiple-profile strategy improves the accuracy of the results generated by the Naïve Bayes Classifier method on simulated and Sargasso Sea datasets. PMID:25210663
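
    A sketch of the multiple-profile idea under toy assumptions: each reference genome is split into similar-length subsequences, a k-mer occurrence profile is built per subsequence (k-mers standing in for the motifs), and a read is assigned to the genome owning the best-scoring profile, with add-one smoothing.

```python
# Sketch: genome-size normalization by splitting each reference genome
# into similar-length subsequences with one k-mer profile each.
import math, random
from collections import Counter

random.seed(0)
K = 3
genomes = {  # deliberately different sizes: the bias the method addresses
    "gA": "".join(random.choice("ACGT") for _ in range(3000)),
    "gB": "".join(random.choice("ACGT") for _ in range(12000)),
}

def kmer_profile(seq):
    return Counter(seq[i:i + K] for i in range(len(seq) - K + 1))

CHUNK = 3000                  # target subsequence length
profiles = []                 # (genome name, subsequence profile)
for name, seq in genomes.items():
    for start in range(0, len(seq), CHUNK):
        part = seq[start:start + CHUNK]
        if len(part) >= CHUNK // 2:        # ignore tiny tail fragments
            profiles.append((name, kmer_profile(part)))

def log_score(read, profile):
    total = sum(profile.values())
    return sum(math.log((profile[read[i:i + K]] + 1) / (total + 4 ** K))
               for i in range(len(read) - K + 1))

read = genomes["gB"][5000:5100]            # a 100 bp read drawn from gB
best_name, _ = max(profiles, key=lambda entry: log_score(read, entry[1]))
print("read assigned to:", best_name)
```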

  8. Structure-based classification and ontology in chemistry

    PubMed Central

    2012-01-01

    Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic

  9. Tutorial on agent-based modeling and simulation. Part 2 : how to model with agents.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2006-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of interacting autonomous agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to do research. Some have gone so far as to contend that ABMS is a new way of doing science. Computational advances make possible a growing number of agent-based applications across many fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling the growth and decline of ancient civilizations to modeling the complexities of the human immune system, and many more. This tutorial describes the foundations of ABMS, identifies ABMS toolkits and development methods illustrated through a supply chain example, and provides thoughts on the appropriate contexts for ABMS versus conventional modeling techniques.
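
    The structural core common to such models, agents with local state and a behavior rule stepped by a scheduler, can be sketched briefly; the restocking "supply chain" flavor below is an illustrative stand-in, not the tutorial's example.

```python
# Minimal sketch of the structural core of an agent-based model: agents
# with local state and a decision rule, advanced by a scheduler loop.
import random

random.seed(0)

class Agent:
    def __init__(self, name):
        self.name, self.inventory = name, 10

    def step(self, world):
        self.inventory -= random.randint(0, 3)        # autonomous demand
        if self.inventory < 5:                        # local decision rule
            self.inventory += world["warehouse"].ship(10)

class Warehouse:
    def __init__(self, stock):
        self.stock = stock

    def ship(self, qty):
        sent = min(qty, self.stock)
        self.stock -= sent
        return sent

world = {"warehouse": Warehouse(stock=100)}
agents = [Agent(f"store{i}") for i in range(4)]
for t in range(10):                                   # the scheduler loop
    for a in agents:
        a.step(world)
print([a.inventory for a in agents], world["warehouse"].stock)
```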

  10. An AERONET-based aerosol classification using the Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-09-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we only keep data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities, therefore it does not include low optical depth values. The analysis is based on "point sources" (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite borne instruments.
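
    A minimal sketch of classification by Mahalanobis distance to reference clusters, in two dimensions rather than the paper's five, with invented cluster means and covariances:

```python
# Sketch: classify an observation by its Mahalanobis distance to each
# reference cluster. Cluster statistics are invented for illustration.
import numpy as np

clusters = {
    "Dust":    (np.array([0.3, 1.2]), np.array([[0.02, 0.0], [0.0, 0.10]])),
    "Biomass": (np.array([0.9, 1.9]), np.array([[0.05, 0.01], [0.01, 0.08]])),
}

def mahalanobis(x, mean, cov):
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

obs = np.array([0.4, 1.3])
dists = {name: mahalanobis(obs, m, c) for name, (m, c) in clusters.items()}
label = min(dists, key=dists.get)
print(dists, "->", label)
```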

  11. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video database. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded on the artistic film theories. A unique sieving-structured neural network is proposed to be the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of 97.8% correct classification rate, measured against the collected human-judges, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.

  12. Geometric nomenclature and classification of RNA base pairs.

    PubMed Central

    Leontis, N B; Westhof, E

    2001-01-01

    Non-Watson-Crick base pairs mediate specific interactions responsible for RNA-RNA self-assembly and RNA-protein recognition. An unambiguous and descriptive nomenclature with well-defined and nonoverlapping parameters is needed to communicate concisely structural information about RNA base pairs. The definitions should reflect underlying molecular structures and interactions and, thus, facilitate automated annotation, classification, and comparison of new RNA structures. We propose a classification based on the observation that the planar edge-to-edge, hydrogen-bonding interactions between RNA bases involve one of three distinct edges: the Watson-Crick edge, the Hoogsteen edge, and the Sugar edge (which includes the 2'-OH and which has also been referred to as the Shallow-groove edge). Bases can interact in either of two orientations with respect to the glycosidic bonds, cis or trans relative to the hydrogen bonds. This gives rise to 12 basic geometric types with at least two H bonds connecting the bases. For each geometric type, the relative orientations of the strands can be easily deduced. High-resolution examples of 11 of the 12 geometries are presently available. Bifurcated pairs, in which a single exocyclic carbonyl or amino group of one base directly contacts the edge of a second base, and water-inserted pairs, in which single functional groups on each base interact directly, are intermediate between two of the standard geometries. The nomenclature facilitates the recognition of isosteric relationships among base pairs within each geometry, and thus facilitates the recognition of recurrent three-dimensional motifs from comparison of homologous sequences. Graphical conventions are proposed for displaying non-Watson-Crick interactions on a secondary structure diagram. The utility of the classification in homology modeling of RNA tertiary motifs is illustrated. PMID:11345429

  13. Towards an agent-oriented programming language based on Scala

    NASA Astrophysics Data System (ADS)

    Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran

    2012-09-01

    Scala and its multi-threaded model based on actors represent an excellent framework for developing purely reactive agents. This paper presents an early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced, BDI agent architectures. The main advantage the new language over many other existing solutions for programming BDI agents is a natural and straightforward integration of imperative and declarative programming constructs, fitted under a single development framework.

  14. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  15. Expected energy-based restricted Boltzmann machine for classification.

    PubMed

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy-based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector as the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared-error training objective. In an earlier study, we demonstrated that the performance and robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output as the negative expected energy (EE-RBM) instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST and NORB data sets, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. PMID:25318375
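
    The two scoring rules can be sketched side by side with random, untrained weights: for input x and candidate class y, FE-RBM scores by the negative free energy (a sum of softplus terms over the hidden units), while EE-RBM replaces each softplus(z) with z·sigmoid(z). The network sizes below are arbitrary.

```python
# Sketch of FE-RBM vs. EE-RBM class scoring with random weights.
# scores_fe[k] = -free energy for class k; scores_ee[k] = -expected energy.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, n_cls = 20, 15, 3
W = rng.normal(0, 0.1, (n_hid, n_vis))   # visible-to-hidden weights
U = rng.normal(0, 0.1, (n_hid, n_cls))   # class-to-hidden weights
c = np.zeros(n_hid)                      # hidden biases
d = np.zeros(n_cls)                      # class biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scores(x):
    out_fe, out_ee = np.empty(n_cls), np.empty(n_cls)
    for k in range(n_cls):
        z = c + W @ x + U[:, k]
        out_fe[k] = d[k] + np.sum(np.logaddexp(0.0, z))  # softplus terms
        out_ee[k] = d[k] + np.sum(z * sigmoid(z))        # z * sigmoid(z) terms
    return out_fe, out_ee

x = rng.random(n_vis)
fe, ee = scores(x)
print(int(np.argmax(fe)), int(np.argmax(ee)))            # predicted classes
```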

  16. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    NASA Astrophysics Data System (ADS)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards comprising statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. It recognizes the gap in spatially-explicit accuracy assessment techniques for complex spatial models, and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the Receiver Operating Characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially-explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. It emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land

  17. A proposed classification scheme for Ada-based software products

    NASA Technical Reports Server (NTRS)

    Cernosek, Gary J.

    1986-01-01

    As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing the potential for varying levels of quality in Ada programs, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A five-level classification scheme is proposed that attempts to decompose this potentially broad spectrum of quality which Ada programs may possess. The number of classes and their corresponding names are not as important as the mere fact that there needs to be some set of criteria from which to evaluate programs existing in Ada. Exact criteria for each class are not presented, nor are any detailed suggestions of how to effectively implement this quality assessment. The idea of Ada-based software classification is introduced and a set of requirements on which to base further research and development is suggested.

  18. An entropy-based classification scheme of meandering rivers

    NASA Astrophysics Data System (ADS)

    Abad, J. D.; Gutierrez, R. R.

    2015-12-01

    Some researchers have highlighted the fact that most river classification schemes have not evolved at the same pace as river morphodynamics models. The most prevalent classification scheme for meandering rivers was proposed by Brice (1975) and is mainly based on observational criteria. Likewise, thermodynamic principles have been applied in geomorphology over a relatively long period of time; for instance, a strong analogy between the meander angle of deflection and the distribution of momentum in gas dynamics has been identified. Based on the analysis of curvature data from 16 natural meanders (totalling 52 realizations) ranging from class B to class G in the Brice classification scheme, we propose a two-parameter meandering classification scheme, namely: [1] the yearly Shannon-wavelet-based negentropy gradient (ΔSWT), and [2] a quantitative continuum of the degree of confinement, estimated from the dimensionless Frechet distance (δF*) between the meandering centerline curvature and that of the mean center. Our results show that δF* identifies a threshold of ˜650 to discriminate freely meandering from confined rivers; thereby, scales of the second and third degree of confinement are quantified. Likewise, the proxy parameter ΔSWT suggests that there are 4 degrees of meandering morphodynamics, which lie in the intervals [10^-1, 10^0], [10^0, 10^1], [10^1, 10^2], and [10^2, 10^3]. Our results also suggest that the lowest negentropy corresponds to class G1 meanders (two phase, bimodal bankfull sinuosity, equiwidth) and class B2 (single phase, wider at bends, no bars). Class G2 (two phase, bimodal bankfull sinuosity, wider at bends with point bars) and class C (single phase, wider at bends, no bars) exhibit higher negentropy. Likewise, the middle-negentropy group comprises both confined meanders (B1, single phase and equiwidth channel, and D, single phase, wider at bends with point bars and chutes) and

  19. Resource-efficient wireless monitoring based on mobile agent migration

    NASA Astrophysics Data System (ADS)

    Smarsly, Kay; Law, Kincho H.; König, Markus

    2011-04-01

    Wireless sensor networks are increasingly adopted in many engineering applications such as environmental and structural monitoring. Having proven to be low-cost, easy to install and accurate, wireless sensor networks serve as a powerful alternative to traditional tethered monitoring systems. However, due to the limited resources of a wireless sensor node, critical problems are the power-consuming transmission of the collected sensor data and the usage of the on-board memory of the sensor nodes. This paper presents a new approach towards resource-efficient wireless sensor networks based on a multi-agent paradigm. In order to efficiently use the restricted computing resources, software agents are embedded in the wireless sensor nodes. On-board agents are designed to autonomously collect, analyze and condense the data sets using relatively simple yet resource-efficient algorithms. Having detected (potential) anomalies in the observed structural system, the on-board agents explicitly request specialized software agents. These specialized agents physically migrate from connected computer systems, or adjacent nodes, to the respective sensor node in order to perform more complex damage detection analyses based on their inherent expert knowledge. A prototype system is designed and implemented, deploying multi-agent technology and dynamic code migration, in a wireless sensor network for structural health monitoring. Laboratory tests are conducted to validate the performance of the agent-based wireless structural health monitoring system and to verify its autonomous damage detection capabilities.

  20. A simulation-based tutor that reasons about multiple agents

    SciTech Connect

    Rhodes Eliot, C. III; Park Woolf, B.

    1996-12-31

    This paper examines the problem of modeling multiple agents within an intelligent simulation-based tutor. Multiple-agent and planning technology were used to enable the system to critique a human agent's reasoning about multiple agents. This perspective arises naturally whenever a student must learn to lead and coordinate a team of people. The system dynamically selected teaching goals, instantiated plans and modeled the student and the domain as it monitored the student's progress. The tutor provides one of the first complete integrations of a real-time simulation with knowledge-based reasoning. Other novel techniques of the system are reported, such as common-sense reasoning about plans, reasoning about protocol mechanisms, and using a real-time simulation for training.

  1. GECC: Gene Expression Based Ensemble Classification of Colon Samples.

    PubMed

    Rathore, Saima; Hussain, Mutawarra; Khan, Asifullah

    2014-01-01

    Gene expression deviates from its normal composition when a patient has cancer. This variation can be used as an effective tool to detect cancer. In this study, we propose a novel gene-expression-based colon classification scheme (GECC) that exploits the variations in gene expressions for classifying colon gene samples into normal and malignant classes. The novelty of GECC lies in two complementary aspects. First, to cater for the overwhelmingly large size of gene-based data sets, various feature extraction strategies, such as chi-square, F-score, principal component analysis (PCA) and minimum redundancy maximum relevancy (mRMR), have been employed to select discriminative genes from the full set. Second, a majority-voting-based ensemble of support vector machines (SVMs) is proposed to classify the given gene-based samples. Previously, individual SVM models have been used for colon classification; however, their performance is limited. In this research study, we propose an SVM-ensemble-based approach for gene-based classification of colon samples, wherein the individual SVM models are constructed through the learning of different SVM kernels: linear, polynomial, radial basis function (RBF), and sigmoid. The predicted results of the individual models are combined through majority voting. In this way, the combined decision space becomes more discriminative. The proposed technique has been tested on four colon data sets and several other binary-class gene expression data sets, and improved performance has been achieved compared with previously reported gene-based colon cancer detection techniques. The computational times required for the training and testing of the 208 × 5,851 data set were 591.01 and 0.019 s, respectively. PMID:26357050
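
    The majority-voting SVM ensemble over different kernels can be sketched directly with scikit-learn; the synthetic two-class data below stands in for gene-expression samples.

```python
# Sketch of a majority-voting ensemble of SVMs with four kernels,
# on synthetic two-class data standing in for gene-expression samples.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=60, n_informative=12,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("linear", SVC(kernel="linear")),
                ("poly", SVC(kernel="poly", degree=3)),
                ("rbf", SVC(kernel="rbf")),
                ("sigmoid", SVC(kernel="sigmoid"))],
    voting="hard")                      # majority vote of the four kernels
print(ensemble.fit(Xtr, ytr).score(Xte, yte))
```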

  2. A science based approach to topical drug classification system (TCS).

    PubMed

    Shah, Vinod P; Yacobi, Avraham; Rădulescu, Flavian Ştefan; Miron, Dalia Simona; Lane, Majella E

    2015-08-01

    The Biopharmaceutics Classification System (BCS) for oral immediate-release solid drug products has been very successful; its implementation in the drug industry and in regulatory approval has shown significant progress. This has been the case primarily because BCS was developed using sound scientific judgment. Following the success of BCS, we have considered a similar classification system for topical drug products based on sound scientific principles. In the USA, most generic topical drug products have qualitatively (Q1) and quantitatively (Q2) the same excipients as the reference listed drug (RLD). The applications of in vitro release (IVR) and in vitro characterization are considered for a range of dosage forms (suspensions, creams, ointments and gels) of differing strengths. We advance a Topical Drug Classification System (TCS) based on a consideration of Q1 and Q2 as well as the arrangement of matter and microstructure of topical formulations (Q3). Four distinct classes are presented for the various scenarios that may arise, depending on whether a biowaiver can be granted or not. PMID:26070249

  3. The DTW-based representation space for seismic pattern classification

    NASA Astrophysics Data System (ADS)

    Orozco-Alzate, Mauricio; Castro-Cabrera, Paola Alexandra; Bicego, Manuele; Londoño-Bonilla, John Makario

    2015-12-01

    Distinguishing among the different seismic volcanic patterns is still one of the most important and labor-intensive tasks for volcano monitoring. This task could be lightened and made free from subjective bias by using automatic classification techniques. In this context, a core but often overlooked issue is the choice of an appropriate representation of the data to be classified. Recently, it has been suggested that using a relative representation (i.e. proximities, namely dissimilarities on pairs of objects) instead of an absolute one (i.e. features, namely measurements on single objects) is advantageous to exploit the relational information contained in the dissimilarities to derive highly discriminant vector spaces, where any classifier can be used. According to that motivation, this paper investigates the suitability of a dynamic time warping (DTW) dissimilarity-based vector representation for the classification of seismic patterns. Results show the usefulness of such a representation in the seismic pattern classification scenario, including analyses of potential benefits from recent advances in the dissimilarity-based paradigm such as the proper selection of representation sets and the combination of different dissimilarity representations that might be available for the same data.
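
    A sketch of the dissimilarity-space construction: each series is represented by its vector of DTW distances to a small representation set, after which any ordinary classifier can operate on those vectors. The DTW recursion and the toy signals below are a plain textbook version, not the paper's pipeline.

```python
# Sketch: represent each time series by its DTW distances to a small
# representation set ("dissimilarity space"). Toy sinusoid signals.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
signals = [np.sin(2 * np.pi * (3 + rng.normal(0, 0.2)) * t) for _ in range(6)]
prototypes = signals[:2]                      # the representation set

# Each row is a series represented in dissimilarity space; these vectors
# can now be fed to any classifier.
vectors = np.array([[dtw(s, p) for p in prototypes] for s in signals])
print(vectors.round(2))
```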

  4. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    PubMed Central

    Kloth, Michael; Buettner, Reinhard

    2014-01-01

    Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. The increase of information on molecular changes however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as the manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaption. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration, using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring. PMID:24879454

  5. Image classification based on region of interest detection

    NASA Astrophysics Data System (ADS)

    Zhou, Huabing; Zhang, Yanduo; Yu, Zhenghong

    2015-12-01

    For image classification tasks, the region containing the object, which plays a decisive role, is indefinite in both position and scale. In this case, it does not seem appropriate to use the spatial pyramid matching (SPM) approach directly. In this paper, we describe an approach to this problem based on region-of-interest (ROI) detection. It verifies the feasibility of using a state-of-the-art object detection algorithm to separate foreground and background for image classification. It first makes use of an object detection algorithm to separate an image into object and scene regions, and then constructs spatial histogram features for them separately based on SPM. Moreover, the detection score is used for rescoring. Our contributions include: i) verifying the feasibility of using a state-of-the-art object detection algorithm to separate the foreground and background used for image classification; ii) a simple method, called coarse object alignment matching, for constructing histograms using the foreground and background provided by object localization. Experimental results demonstrate an obvious superiority of our approach over the standard SPM method; it also outperforms many state-of-the-art methods in several categories.

  6. Perceptually based techniques for semantic image classification and retrieval

    NASA Astrophysics Data System (ADS)

    Depalov, Dejan; Pappas, Thrasyvoulos; Li, Dongge; Gandhi, Bhavan

    2006-02-01

    The accumulation of large collections of digital images has created the need for efficient and intelligent schemes for content-based image retrieval. Our goal is to organize the contents semantically, according to meaningful categories. We present a new approach for semantic classification that utilizes a recently proposed color-texture segmentation algorithm (by Chen et al.), which combines knowledge of human perception and signal characteristics to segment natural scenes into perceptually uniform regions. The color and texture features of these regions are used as medium level descriptors, based on which we extract semantic labels, first at the segment and then at the scene level. The segment features consist of spatial texture orientation information and color composition in terms of a limited number of locally adapted dominant colors. The focus of this paper is on region classification. We use a hierarchical vocabulary of segment labels that is consistent with those used in the NIST TRECVID 2003 development set. We test the approach on a database of 9000 segments obtained from 2500 photographs of natural scenes. For training and classification we use the Linear Discriminant Analysis (LDA) technique. We examine the performance of the algorithm (precision and recall rates) when different sets of features (e.g., one or two most dominant colors versus four quantized dominant colors) are used. Our results indicate that the proposed approach offers significant performance improvements over existing approaches.

  7. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms must be considered in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. Experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  9. An ellipse detection algorithm based on edge classification

    NASA Astrophysics Data System (ADS)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into ordered point sequences and applying a distance constraint between edge points. Effective classification is achieved using an angle criterion between edge points, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm, using the Euclidean distance to measure the distance from an edge point to the elliptical boundary. Experimental results show that the method detects ellipses well even when edges suffer interference or block each other, and that it has higher detection precision and lower time consumption than the RED algorithm.

  10. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
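
    The routing step of this flow can be sketched under simple assumptions: compare a new document's tf-idf vector with each cluster's centroid and send it to the most similar cluster, creating a new cluster below a similarity threshold. The corpus and threshold are invented, and this single-process sketch omits the patent's distributed agent machinery.

```python
# Sketch of the routing step: send a new document vector to the most
# similar cluster centroid, or open a new cluster below a threshold.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["solar power grid", "wind power turbines", "deep learning models"]
clusters = [[0, 1], [2]]                    # doc indices held by each cluster
vec = TfidfVectorizer().fit(docs + ["agent based text clustering"])
M = vec.transform(docs).toarray()

new_doc = "neural network learning"
v = vec.transform([new_doc]).toarray()

sims = [cosine_similarity(v, M[idx].mean(axis=0, keepdims=True))[0, 0]
        for idx in clusters]
best = int(np.argmax(sims))
if sims[best] >= 0.1:                       # illustrative threshold
    clusters[best].append(len(docs))        # route to the best cluster
else:
    clusters.append([len(docs)])            # open a new cluster
print(sims, clusters)
```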

  11. A comprehensive classification of nucleic acid structural families based on strand direction and base pairing.

    PubMed Central

    Lavery, R; Zakrzewska, K; Sun, J S; Harvey, S C

    1992-01-01

    We propose a classification of DNA structures formed from 1 to 4 strands, based only on relative strand directions, base to strand orientation and base pairing geometries. This classification and its associated notation enable all nucleic acids to be grouped into structural families and bring to light possible structures which have not yet been observed experimentally. It also helps in understanding transitions between families and can assist in the design of multistrand structures. PMID:1383936

  12. Soil classification based on the spectral characteristics of topsoil samples

    NASA Astrophysics Data System (ADS)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil utility and management, but China has only a coarse soil map created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study tries to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their laboratory spectral reflectance in the visible and near-infrared region (400-2500 nm) was processed with weighted moving average, resampling, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the second absorption position of the spectral curve, the first absorption valley's area, and the slope of the spectral curve at 500-600 nm and 1340-1360 nm. K-means clustering and a decision tree were then used respectively to build soil classification models. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm respectively; 2) the spectral curve of the Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; 3) the decision tree model showed higher classification accuracy, with accuracies for Black soil, Chernozem, Blown soil and Meadow soil of 100%, 88%, 97% and 50% respectively; the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the first two valleys' area) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.
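
    A sketch of the decision-tree step on extracted spectral indices, with synthetic feature values standing in for the Songnen plain measurements:

```python
# Sketch: decision-tree classification of soils from a few extracted
# spectral indices. Feature values are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
names = ["Black soil", "Chernozem", "Blown soil", "Meadow soil"]
# columns: absorption position (nm), valley area, slope at 500-600 nm
means = np.array([[610, 0.8, 0.02], [650, 0.6, 0.03],
                  [630, 0.3, 0.05], [620, 0.5, 0.04]])
X = np.vstack([m + rng.normal(0, [5, 0.05, 0.004], (37, 3)) for m in means])
y = np.repeat(np.arange(4), 37)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(tree.score(X, y))                       # training accuracy
```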

  13. Agent-Based Modeling of Growth Processes

    ERIC Educational Resources Information Center

    Abraham, Ralph

    2014-01-01

    Growth processes abound in nature, and are frequently the target of modeling exercises in the sciences. In this article we illustrate an agent-based approach to modeling, in the case of a single example from the social sciences: bullying.

  14. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  15. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    PubMed

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic musculoskeletal disease observed mostly in women, manifesting itself as widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria; however, the applicability and sufficiency of these criteria have recently been under debate. In this context, several evaluation methods, including clinical evaluation methods, have been proposed by researchers, and the ACR has had to update its criteria, announced in 1990, 2010 and 2011. The proposed rule-based fuzzy logic method aims to evaluate FMS from a different angle as well. The method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers, who underwent several tests and a physical examination. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. The fuzzy predictor was, on average, 95.56% consistent with at least one of the specialists who was not a creator of the fuzzy rule base. Thus, in diagnostic classification, where the severity of FMS was classified as well, consistent findings were obtained when the interpretations and experience of the specialists were compared with the fuzzy logic approach. The study proposes a rule base that could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of disease severity, which was not available with the ACR criteria. The study was not limited to disease classification; the probability of occurrence and the severity were also classified. In addition, those who were not suffering from FMS were
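
    A rule-based fuzzy classifier of this kind can be sketched compactly. The membership functions and the single rule below are hypothetical stand-ins for the study's ACR-derived rule base, shown only to illustrate the mechanics (fuzzification, AND as minimum, severity output):

        def tri(x, a, b, c):
            # Triangular membership function on [a, c], peaking at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def fms_severity(tender_points, pain, fatigue):
            # Hypothetical memberships; real cut-offs would come from the 1990
            # ACR criteria and specialist experience, as in the study.
            high_tp = tri(tender_points, 8, 14, 18)
            high_pain = tri(pain, 4, 8, 10)
            high_fatigue = tri(fatigue, 4, 8, 10)
            # One Mamdani-style rule (AND = min); the study's rule base has many.
            severe = min(high_tp, high_pain, high_fatigue)
            return {"severe": severe, "not severe": 1.0 - severe}

        print(fms_severity(tender_points=13, pain=7, fatigue=8))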

  16. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  17. The fractional volatility model: An agent-based interpretation

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and/or properties of the financial institutions might be responsible for the features of the fractional volatility model.

  18. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers and is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were applied to discriminate apples according to geographical origin. Hyperspectral images of 207 Fuji apple samples were collected with a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main efficient wavelength images, and characteristic variables were then extracted by texture analysis based on the gray level co-occurrence matrix (GLCM) from the dominant waveband image. All characteristic variables were obtained by fusing the data of the images in the efficient spectra. A support vector machine (SVM) was used to construct the classification model and showed excellent performance, with overall classification accuracies of 92.75% in the training set and 89.86% in the prediction set. The overall results demonstrate that hyperspectral imaging coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples according to geographical origin.
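
    A sketch of the texture-plus-SVM pipeline, using scikit-image's GLCM utilities on placeholder band images; the distances, angles, and texture properties chosen here are illustrative assumptions rather than the paper's settings:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def glcm_features(band):
            # Texture features from one efficient-wavelength band image (uint8).
            glcm = graycomatrix(band, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return [graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")]

        # Placeholder data: one dominant-band image and an origin label per apple.
        rng = np.random.default_rng(1)
        images = rng.integers(0, 256, (20, 64, 64), dtype=np.uint8)
        origins = rng.choice(["regionA", "regionB"], size=20)

        X = np.array([glcm_features(im) for im in images])
        clf = SVC(kernel="rbf").fit(X, origins)
        print(clf.predict(X[:3]))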

  19. Texture based classification of the severity of mitral regurgitation.

    PubMed

    Balodi, Arun; Dewal, M L; Anand, R S; Rawat, Anurag

    2016-06-01

    Clinically, the severity of valvular regurgitation is assessed by manual tracing of the regurgitant jet in the respective chambers. This work presents a computer-aided diagnostic (CAD) system for assessing the severity of mitral regurgitation (MR) based on image processing, without requiring the intervention of a radiologist or clinician. Eight different texture feature sets from the regurgitant area (selected through an arbitrary criterion) have been used. First-order statistics were used initially; however, given their limitations, other texture features were additionally used: the spatial gray level difference matrix, gray level difference statistics, the neighborhood gray tone difference matrix, the statistical feature matrix, Laws' texture energy measures, fractal dimension texture analysis and the Fourier power spectrum. For the classification task, a supervised classifier, i.e., a support vector machine, has been used. Classification accuracy improved significantly when these texture features were used in combination rather than fed individually to the classifier. Classification accuracies of 95.65±1.09, 95.65±1.09 and 95.36±1.13 were obtained in the apical two-chamber, apical four-chamber and parasternal long-axis views, respectively. The results therefore indicate that the proposed CAD system may effectively assist radiologists in establishing (confirming) the MR stages, namely mild, moderate and severe. PMID:27127894

  20. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D; sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications are traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, is defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher-risk-posture classifications are discussed. The talk concludes with a discussion of risk-based safety and mission assurance at GSFC.

  1. Hippocampal shape analysis: surface-based representation and classification

    NASA Astrophysics Data System (ADS)

    Shen, Li; Ford, James; Makedon, Fillia; Saykin, Andrew

    2003-05-01

    Surface-based representation and classification techniques are studied for hippocampal shape analysis. The goal is twofold: (1) develop a new framework of salient feature extraction and accurate classification for 3D shape data; (2) detect hippocampal abnormalities in schizophrenia using this technique. A fine-scale spherical harmonic expansion is employed to describe a closed 3D surface object. The expansion can then easily be transformed to extract only shape information (i.e., excluding translation, rotation, and scaling) and create a shape descriptor comparable across different individuals. This representation captures shape features and is flexible enough to do shape modeling, identify statistical group differences, and generate similar synthetic shapes. Principal component analysis is used to extract a small number of independent features from high dimensional shape descriptors, and Fisher's linear discriminant is applied for pattern classification. This framework is shown to be able to perform well in distinguishing clear group differences as well as small and noisy group differences using simulated shape data. In addition, the application of this technique to real data indicates that group shape differences exist in hippocampi between healthy controls and schizophrenic patients.
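
    The PCA-plus-Fisher-discriminant stage reduces to a few lines. The sketch below uses random arrays in place of real spherical-harmonic shape descriptors, and the component count is an assumption:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        # Placeholder shape descriptors: one spherical-harmonic coefficient
        # vector per hippocampus, with a binary group label (controls/patients).
        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 500))
        y = np.repeat([0, 1], 20)

        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        model.fit(X, y)
        print(model.score(X, y))          # training accuracy on the toy data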

  2. Fruit classification based on weighted score-level feature fusion

    NASA Astrophysics Data System (ADS)

    Kuang, Hulin; Hang Chan, Leanne Lai; Liu, Cairong; Yan, Hong

    2016-01-01

    We describe an object classification method based on weighted score-level feature fusion using learned weights. Our method is able to recognize 20 object classes in a customized fruit dataset. Although the fusion of multiple features is commonly used to distinguish variable object classes, the optimal combination of features is not well defined. Moreover, in these methods, most parameters used for feature extraction are not optimized and the contribution of each feature to an individual class is not considered when determining the weight of the feature. Our algorithm relies on optimizing a single feature during feature selection and learning the weight of each feature for an individual class from the training data using a linear support vector machine before the features are linearly combined with the weights at the score level. The optimal single feature is selected using cross-validation. The optimal combination of features is explored and tested experimentally using a customized fruit dataset with 20 object classes and a variety of complex backgrounds. The experiment results show that the proposed feature fusion method outperforms four state-of-the-art fruit classification algorithms and improves the classification accuracy when compared with some state-of-the-art feature fusion methods.
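
    Score-level fusion of per-feature SVM outputs can be illustrated briefly. In the sketch below the two feature types, their dimensions, and the fixed fusion weights are hypothetical; the paper learns per-class weights from training data rather than fixing them:

        import numpy as np
        from sklearn.svm import LinearSVC

        # Placeholder: two feature types (say, color and shape) for one sample set.
        rng = np.random.default_rng(3)
        X_color, X_shape = rng.normal(size=(100, 16)), rng.normal(size=(100, 32))
        y = rng.integers(0, 3, size=100)          # three fruit classes, say

        svm_color = LinearSVC().fit(X_color, y)
        svm_shape = LinearSVC().fit(X_shape, y)

        # Fuse per-feature decision scores with weights (fixed here; the paper
        # learns a weight per feature and class from the training data).
        w_color, w_shape = 0.6, 0.4
        fused = (w_color * svm_color.decision_function(X_color)
                 + w_shape * svm_shape.decision_function(X_shape))
        print(fused.argmax(axis=1)[:10])          # fused class predictions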

  3. Gadolinium-Based Contrast Agent Accumulation and Toxicity: An Update.

    PubMed

    Ramalho, J; Semelka, R C; Ramalho, M; Nunes, R H; AlObaidy, M; Castillo, M

    2016-07-01

    In current practice, gadolinium-based contrast agents have been considered safe when used at clinically recommended doses in patients without severe renal insufficiency. The causal relationship between gadolinium-based contrast agents and nephrogenic systemic fibrosis in patients with renal insufficiency resulted in new policies regarding the administration of these agents. After an effective screening of patients with renal disease by performing either unenhanced or reduced-dose-enhanced studies in these patients and by using the most stable contrast agents, nephrogenic systemic fibrosis has been largely eliminated since 2009. Evidence of in vivo gadolinium deposition in bone tissue in patients with normal renal function is well-established, but recent literature showing that gadolinium might also deposit in the brain in patients with intact blood-brain barriers caught many individuals in the imaging community by surprise. The purpose of this review was to summarize the literature on gadolinium-based contrast agents, tying together information on agent stability and animal and human studies, and to emphasize that low-stability agents are the ones most often associated with brain deposition. PMID:26659341

  4. Agent based modeling of the coevolution of hostility and pacifism

    NASA Astrophysics Data System (ADS)

    Dalmagro, Fermin; Jimenez, Juan

    2015-01-01

    We propose a model based on a population of agents whose states represent either hostile or peaceful behavior. Randomly selected pairs of agents interact according to a variation of the Prisoner's Dilemma game, and the probabilities that the agents behave aggressively or not are constantly updated by the model so that the agents that remain in the game are those with the highest fitness. We show that the population of agents oscillates between generalized conflict and global peace, without either reaching a stable state. We then use this model to explain some of the emergent behaviors in collective conflicts, by comparing the simulated results with empirical data obtained from social systems. In particular, using public data reports we show how the model precisely reproduces interesting quantitative characteristics of diverse types of armed conflicts, public protests, riots and strikes.
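
    The oscillation mechanism can be illustrated with a toy version of such a model. The payoff table, update rule, and step size below are invented for illustration and are not the authors' specification:

        import random

        # Payoffs for peaceful (P) vs. hostile (H) moves; values are invented.
        PAYOFF = {("P", "P"): (3, 3), ("P", "H"): (0, 5),
                  ("H", "P"): (5, 0), ("H", "H"): (1, 1)}

        hostility = [0.5] * 200                   # P(act hostile) per agent
        for _ in range(10_000):
            i, j = random.sample(range(len(hostility)), 2)
            a = "H" if random.random() < hostility[i] else "P"
            b = "H" if random.random() < hostility[j] else "P"
            pa, pb = PAYOFF[(a, b)]
            # Toy update: payoffs above the mutual-peace level reinforce the
            # move just played, others weaken it.
            for k, move, pay in ((i, a, pa), (j, b, pb)):
                delta = 0.01 * (pay - 3) * (1 if move == "H" else -1)
                hostility[k] = min(1.0, max(0.0, hostility[k] + delta))

        print(sum(hostility) / len(hostility))    # mean hostility of the population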

  5. Laser-based instrumentation for the detection of chemical agents

    SciTech Connect

    Hartford, A. Jr.; Sander, R.K.; Quigley, G.P.; Radziemski, L.J.; Cremers, D.A.

    1982-01-01

    Several laser-based techniques are being evaluated for the remote, point, and surface detection of chemical agents. Among the methods under investigation are optoacoustic spectroscopy, laser-induced breakdown spectroscopy (LIBS), and synchronous detection of laser-induced fluorescence (SDLIF). Optoacoustic detection has already been shown to be capable of extremely sensitive point detection. Its application to remote sensing of chemical agents is currently being evaluated. Atomic emission from the region of a laser-generated plasma has been used to identify the characteristic elements contained in nerve (P and F) and blister (S and Cl) agents. Employing this LIBS approach, detection of chemical agent simulants dispersed in air and adsorbed on a variety of surfaces has been achieved. Synchronous detection of laser-induced fluorescence provides an attractive alternative to conventional LIF, in that an artificial narrowing of the fluorescence emission is obtained. The application of this technique to chemical agent simulants has been successfully demonstrated. 19 figures.

  6. In vitro antimicrobial activity of peroxide-based bleaching agents.

    PubMed

    Napimoga, Marcelo Henrique; de Oliveira, Rogério; Reis, André Figueiredo; Gonçalves, Reginaldo Bruno; Giannini, Marcelo

    2007-06-01

    The antibacterial activity of 4 commercial bleaching agents (Day White, Colgate Platinum, Whiteness 10% and 16%) against 6 oral pathogens (Streptococcus mutans, Streptococcus sobrinus, Streptococcus sanguinis, Candida albicans, Lactobacillus casei, and Lactobacillus acidophilus) and Staphylococcus aureus was evaluated. A chlorhexidine solution was used as the positive control, and distilled water as the negative control. Bleaching agents and control materials were inserted in sterilized stainless-steel cylinders positioned on inoculated agar plates (n = 4). After incubation for the period appropriate to each microorganism, the inhibition zones were measured. Data were analyzed by 2-way analysis of variance and the Tukey test (α = 0.05). All bleaching agents and the chlorhexidine solution produced antibacterial inhibition zones, and antimicrobial activity depended on the peroxide-based bleaching agent. For most microorganisms evaluated, bleaching agents produced inhibition zones similar to or larger than those observed for chlorhexidine. C albicans, L casei, and L acidophilus were the most resistant microorganisms. PMID:17625621

  7. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  8. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a hot research topic, and how to make the agent negotiation process quick and efficient is the main research direction of this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derived theories usually give little consideration to the fairness of both negotiators. This work presents a general model of agent negotiation that considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the degree of satisfaction, learn online to acquire the opponent's knowledge from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. By building this negotiation model, bilateral negotiation achieves higher efficiency and a fairer deal.

  9. Agent-based scheduling system to achieve agility

    NASA Astrophysics Data System (ADS)

    Akbulut, Muhtar B.; Kamarthi, Sagar V.

    2000-12-01

    Today's competitive enterprises need to design, develop, and manufacture their products rapidly and inexpensively. Agile manufacturing has emerged as a new paradigm to meet these challenges. Agility requires, among many other things, scheduling and control software systems that are flexible, robust, and adaptive. In this paper a new agent-based scheduling system (ABSS) is developed to meet the challenges of an agile manufacturing system. In ABSS, unlike in traditional approaches, information and decision-making capabilities are distributed among the system entities, called agents. In contrast with most agent-based scheduling systems, which commonly use a bidding approach, ABSS employs a global performance-monitoring strategy. A production-rate-based global performance metric that effectively assesses system performance is developed to assist the agents' decision-making process. To test the architecture, agent-based discrete-event simulation software was developed. The experiments performed using the simulation software yielded encouraging results supporting the applicability of agent-based systems to the scheduling and control needs of an agile manufacturing system.

  10. A Chemistry-Based Classification for Peridotite Xenoliths

    NASA Astrophysics Data System (ADS)

    Block, K. A.; Ducea, M.; Raye, U.; Stern, R. J.; Anthony, E. Y.; Lehnert, K. A.

    2007-12-01

    The development of a petrological and geochemical database for mantle xenoliths is important for interpreting EarthScope geophysical results. Interpretation of the compositional characteristics of xenoliths requires a sound basis for comparing geochemical results, even when no petrographic modes are available. Peridotite xenoliths are generally classified on the basis of mineralogy (Streckeisen, 1973) derived from point-counting methods. Modal estimates, particularly on heterogeneous samples, are conducted using various methodologies and are therefore subject to large statistical error; moreover, many studies simply do not report the modes. Other classifications for peridotite xenoliths, based on host matrix or tectonic setting (cratonic vs. non-cratonic), are poorly defined and provide little information on where samples from transitional settings fit within a classification scheme (e.g., xenoliths from circum-cratonic locations). We present here a classification for peridotite xenoliths based on bulk-rock major element chemistry, which is one of the most common types of data reported in the literature. A chemical dataset of over 1150 peridotite xenoliths is compiled from two online geochemistry databases, the EarthChem Deep Lithosphere Dataset and GEOROC (http://www.earthchem.org), and is downloaded with the rock names reported in the original publications. Ternary plots of combinations of the SiO2-CaO-Al2O3-MgO (SCAM) components display sharp boundaries that define the dunite, harzburgite, lherzolite, and wehrlite-pyroxenite fields and provide a graphical basis for classification. In addition, for the CaO-Al2O3-MgO (CAM) diagram, a boundary between harzburgite and lherzolite at approximately 19% CaO is defined by a plot of over 160 abyssal peridotite compositions calculated from observed modes using the methods of Asimow (1999) and Baker and Beckett (1999). We anticipate that our SCAM classification is a first step in the development of a uniform basis for
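
    The classification itself reduces to region tests in the normalized ternary space. The sketch below encodes only the single boundary quoted above (roughly 19% CaO separating harzburgite from lherzolite) and is therefore a drastically simplified, hypothetical stand-in for the full SCAM field boundaries:

        def cam_fractions(cao, al2o3, mgo):
            # Normalize CaO-Al2O3-MgO (CAM) to ternary percentages.
            total = cao + al2o3 + mgo
            return 100 * cao / total, 100 * al2o3 / total, 100 * mgo / total

        def classify_peridotite(cao, al2o3, mgo):
            ca, al, mg = cam_fractions(cao, al2o3, mgo)
            # Only the one boundary quoted in the record; the real scheme also
            # delimits the dunite and wehrlite-pyroxenite fields.
            return "lherzolite" if ca >= 19 else "harzburgite"

        print(classify_peridotite(cao=3.2, al2o3=4.1, mgo=38.7))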

  11. Performance verification of a LIF-LIDAR technique for stand-off detection and classification of biological agents

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Zygmunt, Marek; Muzal, Michał; Knysak, Piotr; Młodzianko, Andrzej; Gawlikowski, Andrzej; Drozd, Tadeusz; Kopczyński, Krzysztof; Mierczyk, Zygmunt; Kaszczuk, Mirosława; Traczyk, Maciej; Gietka, Andrzej; Piotrowski, Wiesław; Jakubaszek, Marcin; Ostrowski, Roman

    2015-04-01

    LIF (laser-induced fluorescence) LIDAR (light detection and ranging) is one of the very few promising methods for long-range stand-off detection of air-borne biological particles, and a limited classification of the detected material also appears feasible. We present the design details and hardware setup of the developed range-resolved multichannel LIF-LIDAR system. The device is based on two pulsed UV laser sources operating at 355 nm and 266 nm (the 3rd and 4th harmonics, respectively, of a Q-switched Nd:YAG solid-state laser). Range-resolved fluorescence signals are collected in 28 channels of a compound PMT sensor coupled with a Czerny-Turner spectrograph. The calculated theoretical sensitivities are compared against the results obtained during a measurement field campaign. Classification efforts based on linear processing of the 28-element fluorescence spectral signatures are also presented.

  12. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent system comprises User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, composed of the Data Access Agent, Resource Agent and Geo-Agent, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, composed of the General Ontology Agent, Task Execution Agent, Monitor Agent and Data Analysis Agent, uses a hybrid method to manage all resources registered in the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme, lightening users' burden because they need not search for information site by site manually. The application layer, composed of the User Agent, Geo-Agent and Task Execution Agent, provides a corresponding interface to a domain user. The functions that ASISG should provide are: 1) it integrates different spatial information systems on the semantic Grid

  13. S1 gene-based phylogeny of infectious bronchitis virus: An attempt to harmonize virus classification.

    PubMed

    Valastro, Viviana; Holmes, Edward C; Britton, Paul; Fusaro, Alice; Jackwood, Mark W; Cattoli, Giovanni; Monne, Isabella

    2016-04-01

    Infectious bronchitis virus (IBV) is the causative agent of a highly contagious disease that results in severe economic losses to the global poultry industry. The virus exists in a wide variety of genetically distinct viral types, and both phylogenetic analysis and measures of pairwise similarity among nucleotide or amino acid sequences have been used to classify IBV strains. However, there is currently no consensus on the method by which IBV sequences should be compared, and heterogeneous genetic group designations that are inconsistent with phylogenetic history have been adopted, leading to the confusing coexistence of multiple genotyping schemes. Herein, we propose a simple and repeatable phylogeny-based classification system combined with an unambiguous and rational lineage nomenclature for the assignment of IBV strains. By using complete nucleotide sequences of the S1 gene we determined the phylogenetic structure of IBV, which in turn allowed us to define 6 genotypes that together comprise 32 distinct viral lineages and a number of inter-lineage recombinants. Because of extensive rate variation among IBVs, we suggest that the inference of phylogenetic relationships alone represents a more appropriate criterion for sequence classification than pairwise sequence comparisons. The adoption of an internationally accepted viral nomenclature is crucial for future studies of IBV epidemiology and evolution, and the classification scheme presented here can be updated and revised as novel S1 sequences become available. PMID:26883378

  14. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  15. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
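
    A simple Bayesian per-pixel classifier of this flavor can be sketched with a Gaussian naive Bayes model. The band count, class labels, and data below are placeholders, not the GES DAAC's actual scheme:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Placeholder training set: per-pixel band radiances with labels such as
        # clear / cloud / glint; real labels would come from reference scenes.
        rng = np.random.default_rng(4)
        X_train = rng.normal(size=(5000, 7))              # 7 bands per pixel
        y_train = rng.choice(["clear", "cloud", "glint"], size=5000)

        clf = GaussianNB().fit(X_train, y_train)

        scene = rng.normal(size=(300, 400, 7))            # one small granule
        labels = clf.predict(scene.reshape(-1, 7)).reshape(300, 400)
        clear_mask = labels == "clear"                    # content-based subset
        print(clear_mask.mean())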

  16. The Development of Sugar-Based Anti-Melanogenic Agents

    PubMed Central

    Bin, Bum-Ho; Kim, Sung Tae; Bhin, Jinhyuk; Lee, Tae Ryong; Cho, Eun-Gyung

    2016-01-01

    The regulation of melanin production is important for managing skin darkness and hyperpigmentary disorders. Numerous anti-melanogenic agents that target tyrosinase activity/stability, melanosome maturation/transfer, or melanogenesis-related signaling pathways have been developed. As a rate-limiting enzyme in melanogenesis, tyrosinase has been the most attractive target, but tyrosinase-targeted treatments still pose serious potential risks, indicating the necessity of developing lower-risk anti-melanogenic agents. Sugars are ubiquitous natural compounds found in humans and other organisms. Here, we review the recent advances in research on the roles of sugars and sugar-related agents in melanogenesis and in the development of sugar-based anti-melanogenic agents. The proposed mechanisms of action of these agents include: (a) (natural sugars) disturbing proper melanosome maturation by inducing osmotic stress and inhibiting the PI3 kinase pathway and (b) (sugar derivatives) inhibiting tyrosinase maturation by blocking N-glycosylation. Finally, we propose an alternative strategy for developing anti-melanogenic sugars that theoretically reduce melanosomal pH by inhibiting a sucrose transporter and reduce tyrosinase activity by inhibiting copper incorporation into an active site. These studies provide evidence of the utility of sugar-based anti-melanogenic agents in managing skin darkness and curing pigmentary disorders and suggest a future direction for the development of physiologically favorable anti-melanogenic agents. PMID:27092497

  17. Agent-based services for B2B electronic commerce

    NASA Astrophysics Data System (ADS)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST-sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  18. Object-Based Greenhouse Classification from High Resolution Satellite Imagery: a Case Study Antalya-Turkey

    NASA Astrophysics Data System (ADS)

    Coslu, M.; Sonmez, N. K.; Koc-San, D.

    2016-06-01

    Pixel-based classification is widely used to detect land use and land cover with remote sensing technology. Recently, object-based classification methods have begun to be used alongside pixel-based methods on high-resolution satellite imagery, and previous studies indicate that object-based classification yields more successful results than other classification methods. While pixel-based classification is performed according to the grey values of pixels, object-based classification is executed by generating an image segmentation and updatable rule sets. In this study, we aimed to detect and map greenhouses through object-based classification of high-resolution satellite imagery. The study was carried out in Antalya province, which contains intensive greenhouse cultivation, and consists of three main stages: segmentation, classification and accuracy assessment. In the first stage, segmentation, the most important part of object-based image analysis, the imagery was segmented using the basic spectral bands of high-resolution Worldview-2 satellite imagery. In the second stage, the classification process was executed by applying the nearest-neighbour classifier to the generated segments, and a result map of the study area was produced. Finally, accuracy assessments were performed using land studies and digital data of the area. According to the results, object-based greenhouse classification using high-resolution satellite imagery achieved over 80% accuracy.

  19. Tutorial on agent-based modeling and simulation.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2005-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques.

  20. Inorganic nanoparticle-based contrast agents for molecular imaging

    PubMed Central

    Cho, Eun Chul; Glaus, Charles; Chen, Jingyi; Welch, Michael J.; Xia, Younan

    2010-01-01

    Inorganic nanoparticles including semiconductor quantum dots, iron oxide nanoparticles, and gold nanoparticles have been developed as contrast agents for diagnostics by molecular imaging. Compared to traditional contrast agents, nanoparticles offer several advantages: their optical and magnetic properties can be tailored by engineering the composition, structure, size, and shape; their surfaces can be modified with ligands to target specific biomarkers of disease; the contrast enhancement provided can be equivalent to millions of molecular counterparts; and they can be integrated with a combination of different functions for multi-modal imaging. Here, we review recent advances in the development of contrast agents based on inorganic nanoparticles for molecular imaging, with a touch on contrast enhancement, surface modification, tissue targeting, clearance, and toxicity. As research efforts intensify, contrast agents based on inorganic nanoparticles that are highly sensitive, target-specific, and safe to use are expected to enter clinical applications in the near future. PMID:21074494

  1. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, so that the number of false positives (non-spam messages incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false-positive rate.

  2. Commercial Shot Classification Based on Multiple Features Combination

    NASA Astrophysics Data System (ADS)

    Liu, Nan; Zhao, Yao; Zhu, Zhenfeng; Ni, Rongrong

    This paper presents a commercial shot classification scheme combining well-designed visual and textual features to automatically detect TV commercials. To identify the inherent difference between commercials and general programs, a special mid-level textual descriptor is proposed, aiming to capture the spatio-temporal properties of the video texts typical of commercials. In addition, we introduce an ensemble-learning based combination method, named Co-AdaBoost, to interactively exploit the intrinsic relations between the visual and textual features employed.

  3. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates two variations in addition to the normal gait sequence: clothing (wearing coats) and carrying a bag. The feature vectors in the proposed system are constructed after applying a wavelet transform, and three different feature sets are proposed. The first, spatio-temporal distances, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. We adopted a statistical method for constructing the feature vector from the above sets, and the dimension of the constructed feature vector is reduced using the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-nearest neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
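
    The Fisher-score selection followed by k-nearest-neighbor classification can be sketched directly. Feature dimensions, the number of retained features, and k are assumptions here; the data are placeholders for the wavelet-derived gait features:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def fisher_score(X, y):
            # Per-feature Fisher score: between-class over within-class variance.
            mu = X.mean(axis=0)
            num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2
                      for c in np.unique(y))
            den = sum((y == c).sum() * X[y == c].var(axis=0) for c in np.unique(y))
            return num / (den + 1e-12)

        # Placeholder wavelet-derived gait features with gender labels.
        rng = np.random.default_rng(5)
        X = rng.normal(size=(120, 60))
        y = rng.integers(0, 2, size=120)

        keep = np.argsort(fisher_score(X, y))[-15:]       # top 15 features
        knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, keep], y)
        print(knn.score(X[:, keep], y))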

  4. Classification Based on Hierarchical Linear Models: The Need for Incorporation of Social Contexts in Classification Analysis

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2009-01-01

    Many areas in educational and psychological research involve the use of classification statistical analysis. For example, school districts might be interested in attaining variables that provide optimal prediction of school dropouts. In psychology, a researcher might be interested in the classification of a subject into a particular psychological…

  5. Agent-based modeling and simulation Part 3 : desktop ABMS.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2007-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS 'is a third way of doing science,' in addition to traditional deductive and inductive reasoning (Axelrod 1997b). Computational advances have made possible a growing number of agent-based models across a variety of application domains. Applications range from modeling agent behavior in the stock market, supply chains, and consumer markets, to predicting the spread of epidemics, the threat of bio-warfare, and the factors responsible for the fall of ancient civilizations. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing agent models, and illustrates the development of a simple agent-based model of shopper behavior using spreadsheets.

  6. 3-Hydrazinoindolin-2-one derivatives: Chemical classification and investigation of their targets as anticancer agents.

    PubMed

    Ibrahim, Hany S; Abou-Seri, Sahar M; Abdel-Aziz, Hatem A

    2016-10-21

    Isatin is a well-acknowledged pharmacophore in many clinically approved drugs used for the treatment of cancer. 3-Hydrazinoindolin-2-one, a derivative of isatin, represents the pharmacophore of an important class of biologically active pharmaceutical agents by virtue of their diverse biological activities. This review focuses on the anticancer activity of compounds derived from 3-hydrazinoindolin-2-one. They are classified according to their chemical structure into nine different classes. In each class, different compounds are surveyed, showing their anticancer activity and their potential targets. Moreover, crystallographic data or docking studies are highlighted for some compounds, when available, to provide a deeper understanding of their mechanisms of action. PMID:27391135

  7. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  8. Evaluating Water Demand Using Agent-Based Modeling

    NASA Astrophysics Data System (ADS)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage

  9. Nanochemistry of Protein-Based Delivery Agents

    PubMed Central

    Rajendran, Subin R. C. K.; Udenigwe, Chibuike C.; Yada, Rickey Y.

    2016-01-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior. PMID:27489854

  10. Nanochemistry of Protein-Based Delivery Agents.

    PubMed

    Rajendran, Subin R C K; Udenigwe, Chibuike C; Yada, Rickey Y

    2016-01-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior. PMID:27489854

  11. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    As a large collection of ink and wash paintings (IWPs) is digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image-processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes, enabling histogram-based feature extraction that captures important cues reflecting the styles of different artists. These features then drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision from the multiple neural-network classification results, in which entropy is used as a pointer to combine the global and local features. Experimental evaluations show that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.

  12. Sparse graph-based transduction for image classification

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Yang, Dan; Zhou, Jia; Huangfu, Lunwen; Zhang, Xiaohong

    2015-03-01

    Motivated by the remarkable successes of graph-based transduction (GT) and sparse representation (SR), we present a classifier named sparse graph-based classifier (SGC) for image classification. In SGC, SR is leveraged to measure the correlation (similarity) of every two samples and a graph is constructed for encoding these correlations. Then the Laplacian eigenmapping is adopted for deriving the graph Laplacian of the graph. Finally, SGC can be obtained by plugging the graph Laplacian into the conventional GT framework. In the image classification procedure, SGC utilizes the correlations which are encoded in the learned graph Laplacian, to infer the labels of unlabeled images. SGC inherits the merits of both GT and SR. Compared to SR, SGC improves the robustness and the discriminating power of GT. Compared to GT, SGC sufficiently exploits the whole data. Therefore, it alleviates the undercomplete dictionary issue suffered by SR. Four popular image databases are employed for evaluation. The results demonstrate that SGC can achieve a promising performance in comparison with the state-of-the-art classifiers, particularly in the small training sample size case and the noisy sample case.
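
    The transduction step can be illustrated with a small label-propagation sketch. For brevity, the graph below is built from a Gaussian affinity rather than the sparse-representation graph SGC constructs, so this shows only the GT machinery under that substitution:

        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.normal(size=(60, 10))                  # 60 samples, 10 labeled
        y = np.full(60, -1)
        y[:10] = rng.integers(0, 2, size=10)

        # Gaussian affinity graph (the paper would use sparse-representation
        # coefficients here instead); zero diagonal, row-normalized transitions.
        d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())
        np.fill_diagonal(W, 0)
        P = W / W.sum(axis=1, keepdims=True)

        F = np.zeros((60, 2))
        F[np.arange(10), y[:10]] = 1                   # clamp the labeled rows
        for _ in range(50):                            # iterative propagation
            F = P @ F
            F[:10] = 0
            F[np.arange(10), y[:10]] = 1
        print(F.argmax(axis=1)[10:30])                 # inferred labels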

  13. ECG-based heartbeat classification for arrhythmia detection: A survey.

    PubMed

    Luz, Eduardo José da S; Schwartz, William Robson; Cámara-Chávez, Guillermo; Menotti, David

    2016-04-01

    An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In the last decades, several works were developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods of ECG-based automated abnormalities heartbeat classification by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature presenting concluding remarks and future challenges, and also we propose an evaluation process workflow to guide authors in future works. PMID:26775139

  14. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities for visualization and download.

  15. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities for visualization and download. PMID:22485060

  16. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important but hard-to-solve problem, since it appears to be application dependent, with usually no a priori information available regarding image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and undesired results occur frequently. In this paper, we propose pixel-classification-based color image segmentation using quaternion exponent moments. First, pixel-level image features are extracted based on quaternion exponent moments (QEMs), which can effectively capture image pixel content by considering the correlation between different color channels. Then, the pixel-level features are used as input to a twin support vector machine (TSVM) classifier, and the TSVM model is trained by selecting training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) effective QEMs are introduced to describe color image pixel content, considering the correlation between different color channels; and (2) an excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature. PMID:26618250

  17. Classification of emerald based on multispectral image and PCA

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Zhao, Dazun; Huang, Qingmei; Ren, Pengyuan; Feng, Jie; Zhang, Xiaoyan

    2005-02-01

    Traditionally, the grade discrimination and classification of bowlders (emeralds) are carried out using methods based on people's experience. In our previous work, a method based on the NCS (Natural Color System) color system and sRGB color space conversion was employed for a coarse grade classification of emeralds. However, it is well known that the color match of two colors is not a true "match" unless their spectra are the same. Because metameric colors cannot be differentiated by a three-channel (RGB) camera, a multispectral camera (MSC) is used as the image capturing device in this paper. It consists of a trichromatic digital camera and a set of wide-band filters. The spectra are obtained by measuring a series of natural bowlder (emerald) samples. The principal component analysis (PCA) method is employed to obtain a set of spectral eigenvectors. During the fine classification, the color difference and the RMS of the spectrum difference between estimated and original spectra are used as criteria. It has been shown that 6 eigenvectors are enough to reconstruct the reflection spectra of the testing samples.
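
    The fine-classification criterion (reconstructing a spectrum from a handful of PCA eigenvectors and scoring it by the RMS spectral difference) can be sketched as follows; the random spectra below stand in for measured emerald reflectance curves, and the choice of 6 components follows the abstract.

      import numpy as np

      rng = np.random.default_rng(0)
      spectra = rng.random((50, 31))     # 50 hypothetical reflectance spectra, 31 bands

      # PCA via SVD on mean-centred spectra.
      mean = spectra.mean(axis=0)
      _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
      basis = vt[:6]                     # first 6 spectral eigenvectors

      # Reconstruct a test spectrum from its 6 PCA coefficients.
      test = spectra[0]
      coeffs = (test - mean) @ basis.T
      recon = mean + coeffs @ basis

      # RMS spectral difference between estimated and original spectra,
      # one of the two fine-classification criteria named in the abstract.
      rms = np.sqrt(np.mean((recon - test) ** 2))
      print(f"RMS spectral error: {rms:.4f}")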

  18. No-reference image quality metric based on image classification

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Lee, Chulhee

    2011-12-01

    In this article, we present a new no-reference (NR) objective image quality metric based on image classification. We also propose a new blocking metric and a new blur metric. Both metrics are NR metrics, since they need no information from the original image. The blocking metric was computed by considering that the visibility of horizontal and vertical blocking artifacts can change depending on background luminance levels. When computing the blur metric, we took into account the fact that blurring in edge regions is generally more perceptible to the human visual system. Since different compression standards usually produce different compression artifacts, we classified images into two classes using the proposed blocking metric: one class that contained blocking artifacts and another class that did not. Then, we used different quality metrics based on the classification results. Experimental results show that each metric correlated well with subjective ratings, and the proposed NR image quality metric consistently provided good performance with various types of content and distortions.
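
    As an illustration of the classify-then-measure idea, a crude blockiness score can route an image to a blocking-aware or a blur-oriented metric. The measure below is a generic boundary-gradient ratio, not the authors' luminance-adaptive metric, and the 1.5 cutoff is hypothetical.

      import numpy as np

      def blockiness(img, block=8):
          """Illustrative blocking measure: mean absolute horizontal gradient
          at 8x8 block boundaries versus elsewhere; ratios well above 1
          suggest visible blocking artifacts (ignores luminance masking)."""
          diff = np.abs(np.diff(img.astype(float), axis=1))
          at_boundary = diff[:, block - 1::block]
          elsewhere = np.delete(diff, np.s_[block - 1::block], axis=1)
          return at_boundary.mean() / (elsewhere.mean() + 1e-9)

      # Toy 32x32 "blocky" image: 8-pixel-wide steps plus mild noise.
      rng = np.random.default_rng(0)
      img = np.tile(np.repeat(np.arange(4) * 40, 8), (32, 1)) + rng.normal(0, 1, (32, 32))

      ratio = blockiness(img)
      metric = "blocking-aware metric" if ratio > 1.5 else "blur-oriented metric"
      print(f"boundary/non-boundary gradient ratio {ratio:.1f} -> {metric}")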

  19. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  20. Peatland classification of West Siberia based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.

    2014-12-01

    Increasing interest in peatlands for the prediction of environmental changes requires an understanding of their geographical distribution. The West Siberian Plain is the biggest peatland area in Eurasia and is situated in high latitudes experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented toward environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and the 5th channel was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB and then classified in Multispec (Purdue Research Foundation) using the maximum likelihood algorithm of the supervised classification method. Training sample selection was mostly based on spectral signatures due to limited ancillary and high-resolution image data. As an additional source of information, we applied our field knowledge resulting from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. The overall accuracy assessment was based on 439 randomly assigned test sites and showed a final map accuracy of 80%. Total peatland area was estimated at 73.0 Mha. Various ridge
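
    The scene-wise masking step described above can be sketched as follows; the band arrays and both threshold values are hypothetical, since the paper derives its thresholds per scene.

      import numpy as np

      # Hypothetical Landsat bands for one scene (reflectance-like values).
      green, red, band5 = np.random.default_rng(1).random((3, 100, 100))

      # Green-Red Vegetation Index, used above for peatland masking.
      grvi = (green - red) / (green + red + 1e-9)

      # Threshold masking (values hypothetical): keep vegetated pixels and
      # drop water, which has low reflectance in the 5th channel.
      peatland_mask = (grvi > 0.0) & (band5 > 0.1)
      print("masked pixels:", int(peatland_mask.sum()))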

  1. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    PubMed Central

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems have been used in the past along with various techniques to detect intrusions in networks effectively. However, most of these systems are able to detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036
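
    A rough sketch of the two-stage idea (outlier removal followed by multiclass SVM classification), using standard scikit-learn components as stand-ins for the paper's intelligent-agent algorithms; the data, the 2-sigma cutoff and the class count are hypothetical.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      X = rng.normal(size=(300, 10))       # stand-in for KDD Cup 99 features
      y = rng.integers(0, 4, size=300)     # 4 traffic classes (hypothetical)

      # Simple distance-based outlier removal (a stand-in for the paper's
      # weighted-distance detector): drop points far from their class mean.
      Xs = StandardScaler().fit_transform(X)
      keep = np.ones(len(Xs), dtype=bool)
      for c in np.unique(y):
          d = np.linalg.norm(Xs[y == c] - Xs[y == c].mean(axis=0), axis=1)
          keep[np.where(y == c)[0][d > d.mean() + 2 * d.std()]] = False

      # Multiclass SVM (one-vs-one by default in scikit-learn).
      clf = SVC(kernel="rbf", C=1.0).fit(Xs[keep], y[keep])
      print("training accuracy:", clf.score(Xs[keep], y[keep]))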

  2. Agent-based simulation of a financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
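
    A toy version of such a market conveys the flavor of the mechanism: agents submit random, resource-capped orders, the price moves with order imbalance, and realized volatility feeds back into return sizes. This is only a sketch, not the authors' trading mechanism; holdings are kept fixed for brevity, and the feedback constants are hypothetical.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(3)
      n_agents, n_steps = 100, 2000
      cash = np.full(n_agents, 1000.0)
      shares = np.full(n_agents, 100.0)
      price, prices = 10.0, []
      sigma = 0.01                          # running volatility estimate

      for t in range(n_steps):
          # Random buy/sell orders, capped by each agent's cash or shares.
          side = rng.choice([-1, 1], n_agents)
          size = (rng.random(n_agents) * (cash / price) * (side > 0)
                  + rng.random(n_agents) * shares * (side < 0))
          imbalance = np.sum(side * size)
          ret = sigma * np.tanh(imbalance / (n_agents * 10))
          price *= np.exp(ret)
          prices.append(price)
          sigma = 0.95 * sigma + 0.05 * abs(ret) + 1e-4   # volatility feedback

      # Positive excess kurtosis indicates a leptokurtic return distribution.
      log_ret = np.diff(np.log(prices))
      print("excess kurtosis of log returns:", kurtosis(log_ret))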

  3. Agents and Data Mining in Bioinformatics: Joining Data Gathering and Automatic Annotation with Classification and Distributed Clustering

    NASA Astrophysics Data System (ADS)

    Bazzan, Ana L. C.

    Multiagent systems and data mining techniques are being frequently used in genome projects, especially regarding the annotation process (annotation pipeline). This paper discusses annotation-related problems where agent-based and/or distributed data mining has been successfully employed.

  4. Utilizing ECG-Based Heartbeat Classification for Hypertrophic Cardiomyopathy Identification.

    PubMed

    Rahman, Quazi Abidur; Tereshchenko, Larisa G; Kongkatong, Matthew; Abraham, Theodore; Abraham, M Roselle; Shatkay, Hagit

    2015-07-01

    Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is (potentially fatally) obstructed. A test based on electrocardiograms (ECG) that record the heart electrical activity can help in early detection of HCM patients. This paper presents a cardiovascular-patient classifier we developed to identify HCM patients using standard 10-second, 12-lead ECG signals. Patients are classified as having HCM if the majority of their recorded heartbeats are recognized as characteristic of HCM. Thus, the classifier's underlying task is to recognize individual heartbeats segmented from 12-lead ECG signals as HCM beats, where heartbeats from non-HCM cardiovascular patients are used as controls. We extracted 504 morphological and temporal features—both commonly used and newly-developed ones—from ECG signals for heartbeat classification. To assess classification performance, we trained and tested a random forest classifier and a support vector machine classifier using 5-fold cross validation. We also compared the performance of these two classifiers to that obtained by a logistic regression classifier, and the first two methods performed better than logistic regression. The patient-classification precision of random forests and of support vector machine classifiers is close to 0.85. Recall (sensitivity) and specificity are approximately 0.90. We also conducted feature selection experiments by gradually removing the least informative features; the results show that a relatively small subset of 264 highly informative features can achieve performance measures comparable to those achieved by using the complete set of features. PMID:25915962
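
    The evaluation protocol (random forest versus support vector machine under 5-fold cross-validation over 504 beat features) maps directly onto scikit-learn; the feature matrix below is random stand-in data, not ECG-derived features.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      X = rng.normal(size=(500, 504))   # 504 morphological/temporal features per beat
      y = rng.integers(0, 2, size=500)  # 1 = HCM beat, 0 = control (hypothetical)

      for name, clf in [("random forest", RandomForestClassifier(n_estimators=100)),
                        ("SVM", SVC(kernel="rbf"))]:
          scores = cross_val_score(clf, X, y, cv=5)   # 5-fold CV, as in the paper
          print(f"{name}: mean accuracy {scores.mean():.3f}")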

  5. Kernel-based machine learning techniques for infrasound signal classification

    NASA Astrophysics Data System (ADS)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross-correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. For this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
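
    The kernel and hyperparameter adaptation step can be sketched with a standard grid search; the synthetic features below stand in for PMCC descriptors (trace velocity, azimuth, etc.), and the parameter grid is hypothetical.

      from sklearn.svm import SVC
      from sklearn.model_selection import GridSearchCV
      from sklearn.datasets import make_classification

      # Stand-in for PMCC-derived arrival features with binary labels
      # (treaty-relevant vs. noise-type).
      X, y = make_classification(n_samples=400, n_features=8, random_state=0)

      # Joint search over kernel choice and kernel parameters, echoing the
      # hyperparameter optimization routines described above.
      grid = GridSearchCV(
          SVC(),
          {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
          cv=5,
      )
      grid.fit(X, y)
      print(grid.best_params_, f"CV accuracy {grid.best_score_:.3f}")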

  6. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
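
    A minimal sketch of the tuple space primitive on which the model rests: agents coordinate by writing tuples and retrieving them with templates, as in LINDA. This illustrates the coordination mechanism only, not the paper's database-backed, object-oriented implementation; the tuple contents are hypothetical.

      import threading

      class TupleSpace:
          """Minimal LINDA-style tuple space: agents communicate by writing
          tuples and matching them against templates (None = wildcard)."""
          def __init__(self):
              self._tuples, self._lock = [], threading.Lock()

          def out(self, *tup):              # write a tuple into the space
              with self._lock:
                  self._tuples.append(tup)

          def inp(self, *template):         # take a matching tuple, or None
              with self._lock:
                  for t in self._tuples:
                      if len(t) == len(template) and all(
                              p is None or p == v for p, v in zip(template, t)):
                          self._tuples.remove(t)
                          return t
              return None

      space = TupleSpace()
      space.out("fault", "bus-3", 42.0)     # one knowledge agent reports a fault
      print(space.inp("fault", None, None)) # another agent consumes the report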

  7. Macromolecular and Dendrimer Based Magnetic Resonance Contrast Agents

    PubMed Central

    Bumb, Ambika; Brechbiel, Martin W.; Choyke, Peter

    2010-01-01

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20–25 years, a number of gadolinium based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution and targeting of dendrimer-based MR contrast agents are also discussed. PMID:20590365

  8. Bionanoconjugate-based composites for decontamination of nerve agents.

    PubMed

    Borkar, Indrakant V; Dinu, Cerasela Zoica; Zhu, Guangyu; Kane, Ravi S; Dordick, Jonathan S

    2010-01-01

    We have developed enzyme-based composites that rapidly and effectively detoxify simulants of V- and G-type chemical warfare nerve agents. The approach was based on the efficient immobilization of organophosphorus hydrolase onto carbon nanotubes to form active and stable conjugates that were easily entrapped in commercially available paints. The resulting catalytic composites showed no enzyme leaching and rendered >99% decontamination of 10 g/m(2) paraoxon, a simulant of the V-type nerve agent, in 30 minutes and >95% decontamination of diisopropylfluorophosphate, a simulant of the G-type nerve agent, in 45 minutes. The formulations are expected to be environmentally friendly and to offer an easy-to-use, on-demand decontamination alternative to chemical approaches for sustainable material self-decontamination. PMID:20859933

  9. A procedure for blending manual and correlation-based synoptic classifications

    NASA Astrophysics Data System (ADS)

    Frakes, Brent; Yarnal, Brent

    1997-11-01

    Manual and correlation-based (also known as Lund or Kirchhofer) classifications are important to synoptic climatology, but both have significant drawbacks. Manual classifications are inherently subjective and labour intensive, whereas correlation-based classifications give the investigator little control over the map-patterns generated by the computer. This paper develops a simple procedure that combines these two classification methods, thereby minimizing these weaknesses. The hybrid procedure utilizes a relatively short-term manual classification to generate composite pressure surfaces, which are then used as seeds in a long-term correlation-based computer classification. Overall, the results show that the hybrid classification reproduces the manual classification while optimizing speed, objectivity and investigator control, thus suggesting that the hybrid procedure is superior to the manual or correlation classifications as they are currently used. More specifically, the results demonstrate little difference between the hybrid procedure and the original manual classification at monthly and longer time-scales, with less internal variation in the hybrid types than in the subjective categories. However, the two classifications showed substantial differences at the daily level, not because of poor performance by the hybrid procedure, but because of errors introduced by the subjectivity of the manual classification.
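
    The seeding step can be sketched as follows: each daily pressure field is assigned to the manual composite with which it correlates best, subject to a minimum-correlation cutoff. The grids and the cutoff value below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)
      seeds = rng.normal(size=(4, 20, 25))      # composite pressure maps of the manual types
      fields = rng.normal(size=(1000, 20, 25))  # long-term daily pressure grids (hypothetical)

      def correlate(a, b):
          """Pearson correlation between two flattened grids."""
          a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
          return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      r = np.array([[correlate(f, s) for s in seeds] for f in fields])
      best = r.argmax(axis=1)
      # Days whose best correlation falls below the cutoff stay unclassified (-1).
      types = np.where(r.max(axis=1) > 0.05, best, -1)
      print({t: int((types == t).sum()) for t in range(-1, len(seeds))})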

  10. Model-based classification of visual information for content-based retrieval

    NASA Astrophysics Data System (ADS)

    Jaimes, Alejandro; Chang, Shih-Fu

    1998-12-01

    Most existing approaches to content-based retrieval rely on query by example, or user sketch based on low-level features. However, these are not suitable for semantic (object level) distinctions. In other approaches, information is classified according to a predefined set of classes and classification is either performed manually or by using class-specific algorithms. Most of these systems lack flexibility: the user does not have the ability to define or change the classes, and new classification schemes require implementation of new class-specific algorithms and/or the input of an expert. In this paper, we present a different approach to content-based retrieval and a novel framework for classification of visual information, in which (1) users define their own visual classes and classifiers are learned automatically, and (2) multiple fuzzy classifiers and machine learning techniques are combined for automatic classification at multiple levels (region, perceptual, object-part, object and scene). We present The Visual Apprentice, an implementation of our framework for still images and video that uses a combination of lazy learning, decision trees, and evolution programs for classification and grouping. Our system is flexible, in that models can be changed by users over time, different types of classifiers are combined, and user-model definitions can be applied to object and scene structure classification. Special emphasis is placed on the difference between semantic and visual classes, and between classification and detection. Examples and results are presented to demonstrate the applicability of our approach to perform visual classification and detection.

  11. Knowledge-based classification of neuronal fibers in entire brain.

    PubMed

    Xia, Yan; Turken, U; Whitfield-Gabrieli, Susan L; Gabrieli, John D

    2005-01-01

    This work presents a framework driven by parcellation of brain gray matter in standard normalized space to classify the neuronal fibers obtained from diffusion tensor imaging (DTI) in the entire human brain. Classification of fiber bundles into groups is an important step for the interpretation of DTI data in terms of functional correlates of white matter structures. Connections between anatomically delineated brain regions that are considered to form functional units, such as a short-term memory network, are identified by first clustering fibers based on their terminations in anatomically defined zones of gray matter according to the Talairach Atlas, and then refining these groups based on geometric similarity criteria. Fiber groups identified this way can then be interpreted in terms of their functional properties using knowledge of the functional neuroanatomy of individual brain regions specified in standard anatomical space, as provided by functional neuroimaging and brain lesion studies. PMID:16685847

  12. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
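
    The non-structured GMM variant reduces, at its core, to fitting a Gaussian mixture to multiparametric voxel vectors; a minimal scikit-learn sketch is shown below. The random intensities stand in for MR channels, and the postprocessing with tissue probability maps is omitted.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      # Hypothetical multiparametric MR intensities for one slice: 4 channels
      # (e.g. T1, T1c, T2, FLAIR), flattened to per-voxel feature vectors.
      volume = rng.normal(size=(64, 64, 4))
      voxels = volume.reshape(-1, 4)

      # Unsupervised GMM clustering of voxels into tissue/tumour classes.
      gmm = GaussianMixture(n_components=5, random_state=0).fit(voxels)
      labels = gmm.predict(voxels).reshape(64, 64)
      print("voxels per class:", np.bincount(labels.ravel()))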

  13. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  14. [Vegetation change in Shenzhen City based on NDVI change classification].

    PubMed

    Li, Yi-Jing; Zeng, Hui; Wel, Jian-Bing

    2008-05-01

    Based on the TM images of 1988 and 2003 as well as the land-use change survey data of 2004, the vegetation change in Shenzhen City was assessed by an NDVI (normalized difference vegetation index) change classification method, and the impacts of natural and social constraining factors were analyzed. The results showed that, as a whole, the rapid urbanization in 1988-2003 had relatively little impact on the vegetation cover of the City, but in its low-altitude plain areas, the vegetation cover degraded more obviously. The main causes of the localized ecological degradation were the invasion of built-up areas into woods and orchards, land transformation from woods to orchards at altitudes above 100 m, and the low percentage of green land in some built-up areas. In the future, the protection and construction of vegetation in Shenzhen should focus on strengthening the protection and restoration of remnant woods, avoiding the expansion of built-up areas into better-vegetated woods and orchards, rectifying unreasonable orchard construction at altitudes above 100 m, and consolidating greenbelt construction inside built-up areas. It was considered that the NDVI change classification method works well in efficiently uncovering trends of macroscale vegetation change while avoiding the effect of random noise in the data. PMID:18655594
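
    The core of an NDVI change classification is easy to sketch: compute NDVI for both dates, difference them, and bin the difference into change classes. The band arrays and the ±0.2 class break below are hypothetical; the abstract does not give the paper's actual breaks.

      import numpy as np

      rng = np.random.default_rng(7)
      red_88, nir_88, red_03, nir_03 = rng.random((4, 200, 200))  # hypothetical TM bands

      ndvi_88 = (nir_88 - red_88) / (nir_88 + red_88 + 1e-9)
      ndvi_03 = (nir_03 - red_03) / (nir_03 + red_03 + 1e-9)
      change = ndvi_03 - ndvi_88

      # Bin the NDVI difference into change classes.
      classes = np.digitize(change, [-0.2, 0.2])  # 0=degraded, 1=stable, 2=improved
      print("pixels per class:", np.bincount(classes.ravel(), minlength=3))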

  15. Lung sound classification using cepstral-based statistical features.

    PubMed

    Sengupta, Nandini; Sahidullah, Md; Saha, Goutam

    2016-08-01

    Lung sounds convey useful information related to pulmonary pathology. In this paper, short-term spectral characteristics of lung sounds are studied to characterize the lung sounds for the identification of associated diseases. Motivated by the success of cepstral features in speech signal classification, we evaluate five different cepstral features to recognize three types of lung sounds: normal, wheeze and crackle. Subsequently, for fast and efficient classification, we propose a new feature set computed from the statistical properties of cepstral coefficients. Experiments are conducted on a dataset of 30 subjects using an artificial neural network (ANN) as the classifier. Results show that the statistical features extracted from mel-frequency cepstral coefficients (MFCCs) of lung sounds outperform commonly used wavelet-based features as well as standard cepstral coefficients, including MFCCs. Further, we experimentally optimize different control parameters of the proposed feature extraction algorithm. Finally, we evaluate the features for noisy lung sound recognition. We have found that our newly investigated features are more robust than existing features and show better recognition accuracy even at low signal-to-noise ratios (SNRs). PMID:27286184
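
    The feature construction can be approximated as below: frame-wise MFCCs summarized by per-coefficient statistics across frames. The sketch assumes librosa is available, uses random noise in place of a lung-sound recording, and uses mean and standard deviation as example statistics; the resulting vector would then feed an ANN classifier (e.g. scikit-learn's MLPClassifier).

      import numpy as np
      import librosa  # assumed available for MFCC extraction

      # Hypothetical lung-sound recording: 10 s of audio at 4 kHz.
      sr = 4000
      y = np.random.default_rng(8).normal(size=10 * sr).astype(np.float32)

      # Frame-wise MFCCs, then per-coefficient statistics across frames,
      # yielding one fixed-length feature vector per recording.
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
      features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
      print("feature vector length:", features.shape[0])  # 26 values per recording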

  16. Texture-Based Automated Lithological Classification Using Aeromagnetic Anomaly Images

    USGS Publications Warehouse

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison of the performance of five classes of texture measures in their ability to discriminate lithological units was achieved.

  17. Computational hepatocellular carcinoma tumor grading based on cell nuclei classification

    PubMed Central

    Atupelage, Chamidu; Nagahashi, Hiroshi; Kimura, Fumikazu; Yamaguchi, Masahiro; Tokiya, Abe; Hashiguchi, Akinori; Sakamoto, Michiie

    2014-01-01

    Hepatocellular carcinoma (HCC) is the most common histological type of primary liver cancer. HCC is graded according to the malignancy of the tissues. It is important to diagnose low-grade HCC tumors because these tissues have a good prognosis. Image interpretation-based computer-aided diagnosis (CAD) systems have been developed to automate the HCC grading process. Generally, the HCC grade is determined by the characteristics of liver cell nuclei. Therefore, it is preferable that CAD systems utilize only liver cell nuclei for HCC grading. This paper proposes an automated HCC diagnosis method. In particular, it defines a pipeline-path that excludes non-liver cell nuclei in two consecutive pipeline modules and utilizes liver cell nuclear features for HCC grading. The significance of excluding non-liver cell nuclei for HCC grading is experimentally evaluated. Four categories of liver cell nuclear features were utilized for classifying the HCC tumors. Results indicated that nuclear texture is the dominant feature for HCC grading and that the others contribute to increased classification accuracy. The proposed method was employed to classify a set of regions of interest selected from HCC whole-slide images into five classes and achieved a 95.97% correct classification rate. PMID:26158066

  18. Classification of knee arthropathy with accelerometer-based vibroarthrography.

    PubMed

    Moreira, Dinis; Silva, Joana; Correia, Miguel V; Massada, Marta

    2016-01-01

    One of the most common knee joint disorders is osteoarthritis, which results from the progressive degeneration of cartilage and subchondral bone over time and essentially affects elderly adults. Current evaluation techniques are either complex, expensive or invasive, or simply fail to detect the small and progressive changes that occur within the knee. Vibroarthrography has appeared as a new solution, in which the mechanical vibratory signals arising from the knee are recorded using only an accelerometer and subsequently analyzed, enabling differentiation between a healthy and an arthritic joint. In this study, a vibration-based classification system was created using a dataset with 92 healthy and 120 arthritic segments of knee joint signals collected from 19 healthy and 20 arthritic volunteers, evaluated with k-nearest neighbors and support vector machine classifiers. The best classification was obtained using the k-nearest neighbors classifier with only 6 time-frequency features, with an overall accuracy of 89.8% and a precision, recall and f-measure of 88.3%, 92.4% and 90.1%, respectively. Preliminary results showed that vibroarthrography can be a promising, non-invasive and low-cost tool that could be used for screening purposes. Despite these encouraging results, several improvements to the data collection process and analysis could still be implemented. PMID:27225550

  19. Performance modeling of feature-based classification in SAR imagery

    NASA Astrophysics Data System (ADS)

    Boshra, Michael; Bhanu, Bir

    1998-09-01

    We present a novel method for modeling the performance of a vote-based approach for target classification in SAR imagery. In this approach, the geometric locations of the scattering centers are used to represent 2D model views of a 3D target for a specific sensor under a given viewing condition (azimuth, depression and squint angles). Performance of such an approach is modeled in the presence of data uncertainty, occlusion, and clutter. The proposed method captures the structural similarity between model views, which plays an important role in determining the classification performance. In particular, performance would improve if the model views are dissimilar and vice versa. The method consists of the following steps. In the first step, given a bound on data uncertainty, model similarity is determined by finding feature correspondence in the space of relative translations between each pair of model views. In the second step, statistical analysis is carried out in the vote, occlusion and clutter space, in order to determine the probability of misclassifying each model view. In the third step, the misclassification probability is averaged for all model views to estimate the probability-of-correct-identification (PCI) plot as a function of occlusion and clutter rates. Validity of the method is demonstrated by comparing predicted PCI plots with ones that are obtained experimentally. Results are presented using both XPATCH and MSTAR SAR data.

  20. Multispectral image analysis of forest (grassland) fire based on agent

    NASA Astrophysics Data System (ADS)

    Guan, Jiaying; Li, Deren; Guan, Zequn

    2001-09-01

    Research on agents can now help operators with routine assignments, economizing precious resources and improving the real-time image analysis capability of computers. This paper first gives a brief introduction to the agent concept. We then discuss multispectral images of a certain area based on the concept of agents. The main subject of this paper is the inspection of forest (grassland) fires. The purpose of this paper is to propose three stages with which agents could monitor wild areas and make decisions automatically, without operator intervention. In the first stage, if pixel values exceed a given threshold, the agent gives the operators an alarm and notifies them that something has happened; in the second stage, the agent analyzes data and performs self-learning; in the third stage, according to the database and knowledge base, agents make decisions. As the decisions will be influenced by many factors, models such as heat source, weather, fire and vegetation models are needed.
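
    The first stage reduces to a threshold alarm, sketched below as a tiny agent class; the threshold and band values are hypothetical, and the later self-learning and decision stages are not modelled.

      class FireMonitorAgent:
          """Toy sketch of the first monitoring stage: raise an alarm when
          pixel values in a multispectral band exceed a threshold."""
          def __init__(self, threshold):
              self.threshold = threshold

          def inspect(self, band):
              hot = [(i, v) for i, v in enumerate(band) if v > self.threshold]
              if hot:
                  print(f"ALARM: {len(hot)} pixels above {self.threshold}")
              return hot

      agent = FireMonitorAgent(threshold=200)
      agent.inspect([120, 135, 240, 250, 110])  # hypothetical thermal-band values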

  1. Adding ecosystem function to agent-based land use models

    PubMed Central

    Yadav, V.; Del Grosso, S.J.; Parton, W.J.; Malanson, G.P.

    2015-01-01

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeochemical models are needed in order to calculate such fluxes. The Century model is described with particular attention to the land use choices that it can encompass. When Century is applied to a land use problem the combinatorial choices lead to a potentially unmanageable number of simulation runs. Century is also parameter-intensive. Three ways of including Century output in agent-based models, ranging from separately calculated look-up tables to agents running Century within the simulation, are presented. The latter may be most efficient, but it moves the computing costs to where they are most problematic. Concern for computing costs should not be a roadblock. PMID:26191077

  2. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  3. Classification of genes based on gene expression analysis

    NASA Astrophysics Data System (ADS)

    Angelova, M.; Myers, C.; Faith, J.

    2008-05-01

    Systems biology and bioinformatics are now major fields for productive research. DNA microarrays and other array technologies and genome sequencing have advanced to the point that it is now possible to monitor gene expression on a genomic scale. Gene expression analysis is discussed and some important clustering techniques are considered. The patterns identified in the data suggest similarities in the gene behavior, which provides useful information for the gene functionalities. We discuss measures for investigating the homogeneity of gene expression data in order to optimize the clustering process. We contribute to the knowledge of functional roles and regulation of E. coli genes by proposing a classification of these genes based on consistently correlated genes in expression data and similarities of gene expression patterns. A new visualization tool for targeted projection pursuit and dimensionality reduction of gene expression data is demonstrated.
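
    The proposed grouping by consistently correlated expression patterns can be sketched with hierarchical clustering under a correlation distance; the expression matrix and the choice of five clusters below are hypothetical.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(9)
      expr = rng.normal(size=(60, 12))   # 60 genes x 12 conditions (hypothetical)

      # Cluster genes by similarity of expression patterns using correlation
      # distance, so consistently correlated genes fall into the same class.
      Z = linkage(expr, method="average", metric="correlation")
      classes = fcluster(Z, t=5, criterion="maxclust")
      print("genes per class:", np.bincount(classes)[1:])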

  4. Classification of genes based on gene expression analysis

    SciTech Connect

    Angelova, M. Myers, C. Faith, J.

    2008-05-15

    Systems biology and bioinformatics are now major fields for productive research. DNA microarrays and other array technologies and genome sequencing have advanced to the point that it is now possible to monitor gene expression on a genomic scale. Gene expression analysis is discussed and some important clustering techniques are considered. The patterns identified in the data suggest similarities in the gene behavior, which provides useful information for the gene functionalities. We discuss measures for investigating the homogeneity of gene expression data in order to optimize the clustering process. We contribute to the knowledge of functional roles and regulation of E. coli genes by proposing a classification of these genes based on consistently correlated genes in expression data and similarities of gene expression patterns. A new visualization tool for targeted projection pursuit and dimensionality reduction of gene expression data is demonstrated.

  5. Classification and thermal history of petroleum based on light hydrocarbons

    NASA Astrophysics Data System (ADS)

    Thompson, K. F. M.

    1983-02-01

    Classifications of oils and kerogens are described. Two indices, termed the Heptane and Isoheptane Values, are employed, based on analyses of gasoline-range hydrocarbons. The indices assess the degree of paraffinicity and allow the definition of four types of oil: normal, mature, supermature, and biodegraded. The values of these indices measured in sediment extracts are a function of the maximum attained temperature and of kerogen type. Aliphatic and aromatic kerogens are definable. Only the extracts of sediments bearing aliphatic kerogens with a specific thermal history are identical to the normal oils, which form the largest group (41%) in the sample set. This group was evidently generated at subsurface temperatures of the order of 138°-149°C (280°-300°F), defined under specific conditions of burial history. It is suggested that all other petroleums are transformation products of normal oils.

  6. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  7. An Agent-based Framework for Web Query Answering.

    ERIC Educational Resources Information Center

    Wang, Huaiqing; Liao, Stephen; Liao, Lejian

    2000-01-01

    Discusses discrepancies between user queries on the Web and the answers provided by information sources; proposes an agent-based framework for Web mining tasks; introduces an object-oriented deductive data model and a flexible query language; and presents a cooperative mechanism for query answering. (Author/LRW)

  8. Adding ecosystem function to agent-based land use models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeoche...

  9. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
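
    The first variant's activation rule is often summarized as: an agent rebels when grievance (hardship weighted by perceived illegitimacy) minus net risk (risk aversion times estimated arrest probability) exceeds a small threshold. The sketch below follows that common summary; the constants (2.3 in the arrest-probability estimate, 0.1 as the threshold) are frequently quoted values that should be checked against the paper, and the jail-term factor is omitted.

      import math
      import random

      random.seed(0)

      def decide(hardship, legitimacy, risk_aversion, cops, actives, threshold=0.1):
          """One agent's activation rule in the spirit of the model above:
          rebel when grievance minus net risk exceeds a small threshold."""
          grievance = hardship * (1.0 - legitimacy)
          arrest_p = 1.0 - math.exp(-2.3 * cops / max(actives, 1))  # estimated arrest probability
          net_risk = risk_aversion * arrest_p
          return grievance - net_risk > threshold

      # A toy population facing 5 visible cops and 20 visible actives.
      pop = [(random.random(), random.random()) for _ in range(1000)]  # (hardship, risk aversion)
      rebels = sum(decide(h, legitimacy=0.6, risk_aversion=r, cops=5, actives=20)
                   for h, r in pop)
      print("active agents:", rebels)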

  10. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imagery and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the method utilized, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor (NN) method is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in mass spectral data processing. In conclusion, the results of this work are helpful for the study of galaxy and quasar spectra classification. PMID:22097877
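
    Since nearest neighbor needs no training phase, the classifier is essentially a distance computation; a minimal sketch over flux vectors is shown below, with random arrays standing in for labelled survey spectra.

      import numpy as np

      rng = np.random.default_rng(10)
      train = rng.normal(size=(200, 300))   # labelled spectra (flux vs. wavelength)
      labels = rng.integers(0, 2, size=200) # 0 = galaxy, 1 = quasar (hypothetical)
      query = rng.normal(size=(300,))       # an unlabelled spectrum

      # Nearest neighbor classification: take the label of the closest
      # reference spectrum under Euclidean distance over flux values.
      nearest = np.argmin(np.linalg.norm(train - query, axis=1))
      print("predicted class:", "quasar" if labels[nearest] else "galaxy")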