Science.gov

Sample records for agent based classification

  1. A Library Book Intelligence Classification System based on Multi-agent

    NASA Astrophysics Data System (ADS)

    Pengfei, Guo; Liangxian, Du; Junxia, Qi

    This paper introduces the concept of artificial intelligence into the administrative system of the library and presents a multi-agent model of a robot system for book classification. The intelligent robot recognizes book barcodes automatically, and a classification algorithm based on the Chinese Library Classification is given. The algorithm calculates the concrete shelf position of each book and relates it to all similar books, so that the robot can shelve all books of the same class in a single pass without turning back.

  2. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method that uses Bayesian analysis to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with an analytical method based on the Discrete Fourier Transform (DFT). The purpose is to demonstrate analytical methods that map time series data, such as market prices, onto distances. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in a mapped space, which makes understanding and inference easier than working with the raw series; (2) the methods can analyze the uncertain time series data produced by agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can resolve a 1% difference in the agents' emission reduction targets.
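
    A rough sketch of the DFT side of this comparison, using synthetic series in place of real market prices (the Bayesian mapping is not shown):

    ```python
    # Illustrative only: map time series (e.g. market prices) to a distance
    # via their leading DFT magnitude coefficients.
    import numpy as np

    def dft_distance(series_a, series_b, n_coeffs=8):
        """Euclidean distance between leading DFT magnitudes of two series."""
        fa = np.abs(np.fft.rfft(series_a))[:n_coeffs]
        fb = np.abs(np.fft.rfft(series_b))[:n_coeffs]
        return float(np.linalg.norm(fa - fb))

    t = np.linspace(0.0, 1.0, 256)
    print(dft_distance(np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 7 * t)))
    ```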

  3. Mass classification in mammography with multi-agent based fusion of human and machine intelligence

    NASA Astrophysics Data System (ADS)

    Xi, Dongdong; Fan, Ming; Li, Lihua; Zhang, Juan; Shan, Yanna; Dai, Gang; Zheng, Bin

    2016-03-01

    Although computer-aided diagnosis (CAD) systems can be applied to classifying breast masses, the effect of this method on improving radiologists' accuracy in distinguishing malignant from benign lesions remains unclear. This study provides a novel method of classifying breast masses by integrating human and machine intelligence. In this research, 224 breast masses were selected from mammograms in the DDSM database with Breast Imaging Reporting and Data System (BI-RADS) categories. Three observers (a senior and a junior radiologist, as well as a radiology resident) independently read and classified these masses using the Positive Predictive Values (PPV) for each BI-RADS category. Meanwhile, a CAD system was also implemented to classify these breast masses as malignant or benign. To combine the decisions from the radiologists and CAD, a multi-agent fusion method is provided. Significant improvements were observed for the fusion system over either the radiologists or CAD alone. The area under the receiver operating characteristic curve (AUC) of the fusion system increased by 9.6%, 10.3% and 21% compared to that of the senior, junior and resident-level radiologists, respectively. In addition, the AUCs of the method based on fusing each individual radiologist with CAD were 3.5%, 3.6% and 3.3% higher than that of CAD alone. Finally, the fusion of the three radiologists with CAD achieved an AUC of 0.957, 5.6% higher than CAD alone. Our results indicate that the proposed fusion method performs better than either radiologists or CAD alone.

  4. Multi-Agent Information Classification Using Dynamic Acquaintance Lists.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Snehasis; Peng, Shengquan; Raje, Rajeev; Palakal, Mathew; Mostafa, Javed

    2003-01-01

    Discussion of automated information services focuses on information classification and collaborative agents, i.e. intelligent computer programs. Highlights include multi-agent systems; distributed artificial intelligence; thesauri; document representation and classification; agent modeling; acquaintances, or remote agents discovered through…

  5. PADMA: PArallel Data Mining Agents for scalable text classification

    SciTech Connect

    Kargupta, H.; Hamzaoglu, I.; Stafford, B.

    1997-03-01

    This paper introduces PADMA (PArallel Data Mining Agents), a parallel agent based system for scalable text classification. PADMA contains modules for (1) parallel data accessing operations, (2) parallel hierarchical clustering, and (3) web-based data visualization. This paper introduces the general architecture of PADMA and presents a detailed description of its different modules.

  6. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP), (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) from international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.

  7. Granular loess classification based

    SciTech Connect

    Browzin, B.S.

    1985-05-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.

  8. Multi-agent Negotiation Mechanisms for Statistical Target Classification in Wireless Multimedia Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

    The recent availability of low cost and miniaturized hardware has allowed wireless sensor networks (WSNs) to retrieve audio and video data in real world applications, which has fostered the development of wireless multimedia sensor networks (WMSNs). Resource constraints and challenging multimedia data volume make development of efficient algorithms to perform in-network processing of multimedia contents imperative. This paper proposes solving problems in the domain of WMSNs from the perspective of multi-agent systems. The multi-agent framework enables flexible network configuration and efficient collaborative in-network processing. The focus is placed on target classification in WMSNs where audio information is retrieved by microphones. To deal with the uncertainties related to audio information retrieval, the statistical approaches of power spectral density estimates, principal component analysis and Gaussian process classification are employed. A multi-agent negotiation mechanism is specially developed to efficiently utilize limited resources and simultaneously enhance classification accuracy and reliability. The negotiation is composed of two phases, where an auction based approach is first exploited to allocate the classification task among the agents and then individual agent decisions are combined by the committee decision mechanism. Simulation experiments with real world data are conducted and the results show that the proposed statistical approaches and negotiation mechanism not only reduce memory and computation requirements in WMSNs but also significantly enhance classification accuracy and reliability.
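
    A toy rendering of the two-phase negotiation described above, with invented bids and decisions (the real system scores agents on resources and classification confidence):

    ```python
    # Phase 1: auction allocates the classification task to the best bidders.
    # Phase 2: a committee vote fuses the individual agent decisions.
    from collections import Counter
    import random

    agents = [{"id": i,
               "bid": random.random(),               # e.g. energy x confidence
               "decision": random.choice(["car", "truck"])}
              for i in range(8)]

    committee = sorted(agents, key=lambda a: a["bid"], reverse=True)[:3]
    votes = Counter(a["decision"] for a in committee)
    print("fused decision:", votes.most_common(1)[0][0])
    ```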

  9. Classification-based reasoning

    NASA Technical Reports Server (NTRS)

    Gomez, Fernando; Segami, Carlos

    1991-01-01

    A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.

  10. Optimal Information-based Classification

    NASA Astrophysics Data System (ADS)

    Hyun, Baro

    Classification is the allocation of an object to an existing category among several based on uncertain measurements. Since information is used to quantify uncertainty, it is natural to consider classification and information as complementary subjects. This dissertation touches upon several topics that relate to the problem of classification, such as information, classification, and team classification. Motivated by the U.S. Air Force Intelligence, Surveillance, and Reconnaissance missions, we investigate the aforementioned topics for classifiers that follow two models: classifiers with workload-independent and workload-dependent performance. We adopt workload-independence and dependence as "first-order" models to capture the features of machines and humans, respectively. We first investigate the relationship between information in the sense of Shannon and classification performance, which is defined as the probability of misclassification. We show that while there is a predominant congruence between them, there are cases when such congruence is violated. We show the phenomenon for both workload-independent and workload-dependent classifiers and investigate the cause of such phenomena analytically. One way of making classification decisions is by setting a threshold on a measured quantity. For instance, if a measurement falls on one side of the threshold, the object that provided the measurement is classified as one type, otherwise, it is of another type. Exploiting thresholding, we formalize a classifier with dichotomous decisions (i.e., with two options, such as true or false) given a single variable measurement. We further extend the formalization to classifiers with trichotomy (i.e., with three options, such as true, false or unknown) and with multivariate measurements. When a team of classifiers is considered, issues on how to exploit redundant numbers of classifiers arise. We analyze these classifiers under different architectures, such as parallel or nested
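
    The thresholding idea lends itself to a small illustration; this sketch assumes equal-variance Gaussian measurement models with equal priors, which is not necessarily the dissertation's setting:

    ```python
    # Dichotomous threshold classifier on a scalar measurement:
    # class A ~ N(mu_a, sigma), class B ~ N(mu_b, sigma), equal priors.
    from scipy.stats import norm

    mu_a, mu_b, sigma = 0.0, 2.0, 1.0
    threshold = (mu_a + mu_b) / 2  # optimal for equal priors and variances

    def classify(x: float) -> str:
        """Measurements below the threshold -> 'A', otherwise 'B'."""
        return "A" if x < threshold else "B"

    # P(error) = P(x > t | A) * P(A) + P(x < t | B) * P(B)
    p_error = 0.5 * norm.sf(threshold, mu_a, sigma) \
            + 0.5 * norm.cdf(threshold, mu_b, sigma)
    print(f"P(misclassification) = {p_error:.4f}")
    ```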

  11. Agent-Based Literacy Theory

    ERIC Educational Resources Information Center

    McEneaney, John E.

    2006-01-01

    The purpose of this theoretical essay is to explore the limits of traditional conceptualizations of reader and text and to propose a more general theory based on the concept of a literacy agent. The proposed theoretical perspective subsumes concepts from traditional theory and aims to account for literacy online. The agent-based literacy theory…

  12. Agent-based forward analysis

    SciTech Connect

    Kerekes, Ryan A.; Jiao, Yu; Shankar, Mallikarjun; Potok, Thomas E.; Lusk, Rick M.

    2008-01-01

    We propose software agent-based "forward analysis" for efficient information retrieval in a network of sensing devices. In our approach, processing is pushed to the data at the edge of the network via intelligent software agents rather than pulling data to a central facility for processing. The agents are deployed with a specific query and perform varying levels of analysis of the data, communicating with each other and sending only relevant information back across the network. We demonstrate our concept in the context of face recognition using a wireless test bed comprised of PDA cell phones and laptops. We show that agent-based forward analysis can provide a significant increase in retrieval speed while decreasing bandwidth usage and information overload at the central facility.

  13. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not require complete projection data. This makes them applicable to the inspection of BGA solder joints, which is usually performed with x-ray laminography and yields poorer reconstructed images, but their convergence speed is low. This paper explores a projection classification based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction speed decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the advantage.

  14. Standoff lidar simulation for biological warfare agent detection, tracking, and classification

    NASA Astrophysics Data System (ADS)

    Jönsson, Erika; Steinvall, Ove; Gustafsson, Ove; Kullander, Fredrik; Jonsson, Per

    2010-04-01

    Lidar has been identified as a promising sensor for remote detection of biological warfare agents (BWA). Elastic IR lidar can be used for cloud detection at long ranges and UV laser induced fluorescence can be used for discrimination of BWA against naturally occurring aerosols. This paper will describe a simulation tool which enables the simulation of lidar for detection, tracking and classification of aerosol clouds. The cloud model was available from another project and has been integrated into the simulation tool. It takes into account the type of aerosol, type of release (plume or puff), amounts of BWA, winds, height above the ground and terrain roughness. The model input includes laser and receiver parameters for both the IR and UV channels as well as the optical parameters of the background, cloud and atmosphere. The wind and cloud conditions and terrain roughness are specified for the cloud simulation. The search area including the angular sampling resolution together with the IR laser pulse repetition frequency defines the search conditions. After cloud detection in the elastic mode, the cloud can be tracked using appropriate algorithms. In the tracking mode the classification using fluorescence spectral emission is simulated and tested using correlation against known spectra. Other methods for classification based on elastic backscatter are also discussed as well as the determination of particle concentration. The simulation estimates and displays the lidar response, cloud concentration as well as the goodness of fit for the classification using fluorescence.

  15. Review of therapeutic agents for burns pruritus and protocols for management in adult and paediatric patients using the GRADE classification

    PubMed Central

    Goutos, Ioannis; Clarke, Maria; Upson, Clara; Richardson, Patricia M.; Ghosh, Sudip J.

    2010-01-01

    To review the current evidence on therapeutic agents for burns pruritus and use the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) classification to propose therapeutic protocols for adult and paediatric patients. All published interventions for burns pruritus were analysed by a multidisciplinary panel of burns specialists following the GRADE classification to rate individual agents. Following the collation of results and panel discussion, consensus protocols are presented. Twenty-three studies appraising therapeutic agents in the burns literature were identified. The majority of these studies (16 out of 23) are of an observational nature, making an evidence-based approach to defining optimal therapy not feasible. Our multidisciplinary approach employing the GRADE classification recommends the use of antihistamines (cetirizine and cimetidine) and gabapentin as the first-line pharmacological agents for both adult and paediatric patients. Ondansetron and loratadine are the second-line medications in our protocols. We additionally recommend a variety of non-pharmacological adjuncts for the perusal of clinicians in order to maximise symptomatic relief in patients troubled with postburn itch. Most studies in the subject area lack sufficient statistical power to dictate a ‘gold standard’ treatment agent for burns itch. We encourage clinicians to employ the GRADE system in order to delineate the most appropriate therapeutic approach for burns pruritus until further research elucidates the most efficacious interventions. This widely adopted classification empowers burns clinicians to tailor therapeutic regimens according to current evidence, patient values, risks and resource considerations in different medical environments. PMID:21321658

  16. Agent Assignment for Process Management: Pattern Based Agent Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Jablonski, Stefan; Talib, Ramzan

    In almost all workflow management systems, the role concept is determined once, at the introduction of a workflow application, and is never reevaluated to observe how successfully certain processes are performed by the authorized agents. This paper describes an approach that evaluates how successfully agents are working and feeds this information back into future agent assignment in order to achieve maximum business benefit for the enterprise. The approach, called Pattern based Agent Performance Evaluation (PAPE), is based on machine learning combined with post-processing techniques. We report on the results of our experiments and discuss issues and improvements of our approach.

  17. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex exhibits remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighboring pixels and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.

  18. Nanoparticle-based theranostic agents

    PubMed Central

    Xie, Jin; Lee, Seulki; Chen, Xiaoyuan

    2010-01-01

    Theranostic nanomedicine is emerging as a promising therapeutic paradigm. It takes advantage of the high capacity of nanoplatforms to ferry cargo and loads onto them both imaging and therapeutic functions. The resulting nanosystems, capable of diagnosis, drug delivery and monitoring of therapeutic response, are expected to play a significant role in the dawning era of personalized medicine, and much research effort has been devoted toward that goal. A convenience in constructing such function-integrated agents is that many nanoplatforms are already, themselves, imaging agents. Their well developed surface chemistry makes it easy to load them with pharmaceutics and promote them to be theranostic nanosystems. Iron oxide nanoparticles, quantum dots, carbon nanotubes, gold nanoparticles and silica nanoparticles, have been previously well investigated in the imaging setting and are candidate nanoplatforms for building up nanoparticle-based theranostics. In the current article, we will outline the progress along this line, organized by the category of the core materials. We will focus on construction strategies and will discuss the challenges and opportunities associated with this emerging technology. PMID:20691229

  19. Agent-based enterprise integration

    SciTech Connect

    N. M. Berry; C. M. Pancerella

    1998-12-01

    The authors are developing and deploying software agents in an enterprise information architecture such that the agents manage enterprise resources and facilitate user interaction with these resources. The enterprise agents are built on top of a robust software architecture for data exchange and tool integration across heterogeneous hardware and software. The resulting distributed multi-agent system serves as a method of enhancing enterprises in the following ways: providing users with knowledge about enterprise resources and applications; accessing the dynamically changing enterprise; locating enterprise applications and services; and improving search capabilities for applications and data. Furthermore, agents can access non-agents (i.e., databases and tools) through the enterprise framework. The ultimate target of the effort is the user; they are attempting to increase user productivity in the enterprise. This paper describes their design and early implementation and discusses the planned future work.

  20. CATS-based Agents That Err

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.

  1. Contour-based classification of video objects

    NASA Astrophysics Data System (ADS)

    Richter, Stephan; Kuehne, Gerald; Schuster, Oliver

    2000-12-01

    The recognition of objects that appear in a video sequence is an essential aspect of any video content analysis system. We present an approach which classifies a segmented video object based on its appearance in successive video frames. The classification is performed by matching curvature features of the contours of these object views to a database containing preprocessed views of prototypical objects, using a modified curvature scale space technique. By integrating the results of a number of successive frames and by using the modified curvature scale space technique as an efficient representation of object contours, our approach enables robust, tolerant and rapid classification of video objects.

  2. Contour-based classification of video objects

    NASA Astrophysics Data System (ADS)

    Richter, Stephan; Kuehne, Gerald; Schuster, Oliver

    2001-01-01

    The recognition of objects that appear in a video sequence is an essential aspect of any video content analysis system. We present an approach which classifies a segmented video object based on its appearance in successive video frames. The classification is performed by matching curvature features of the contours of these object views to a database containing preprocessed views of prototypical objects, using a modified curvature scale space technique. By integrating the results of a number of successive frames and by using the modified curvature scale space technique as an efficient representation of object contours, our approach enables robust, tolerant and rapid classification of video objects.

  3. Land classification based on hydrological landscape units

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Fenicia, F.; Hrachowitz, M.; Savenije, H. H. G.

    2011-05-01

    This paper presents a new type of hydrological landscape classification based on dominant runoff mechanisms. Three landscape classes are distinguished: wetland, hillslope and plateau, corresponding to three dominant hydrological regimes: saturation excess overland flow, storage excess sub-surface flow, and deep percolation. Topography, geology and land use hold the key to identifying these landscapes. The height above the nearest drain (HAND) and the surface slope, which can be readily obtained from a digital elevation model, appear to be the dominant topographical parameters for hydrological classification. In this paper several indicators for classification are tested as well as their sensitivity to scale and sample size. It appears that the best results are obtained by the simple use of HAND and slope. The results obtained compare well with field observations and the topographical wetness index. The new approach appears to be an efficient method to "read the landscape" on the basis of which conceptual models can be developed.
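
    A minimal sketch of this "reading the landscape" rule, with illustrative thresholds rather than the paper's calibrated values:

    ```python
    # Classify DEM cells into wetland / hillslope / plateau from HAND and slope.
    import numpy as np

    HAND_WETLAND_MAX = 5.0      # metres above nearest drain (assumed value)
    SLOPE_HILLSLOPE_MIN = 0.1   # rise/run fraction (assumed value)

    def classify_landscape(hand: np.ndarray, slope: np.ndarray) -> np.ndarray:
        """Return labels per cell: 0 = wetland, 1 = hillslope, 2 = plateau."""
        classes = np.full(hand.shape, 2)           # default: high and flat
        classes[slope >= SLOPE_HILLSLOPE_MIN] = 1  # steep cells -> hillslope
        classes[hand <= HAND_WETLAND_MAX] = 0      # near-drain cells -> wetland
        return classes

    hand = np.array([[2.0, 8.0], [12.0, 30.0]])
    slope = np.array([[0.02, 0.20], [0.15, 0.01]])
    print(classify_landscape(hand, slope))
    ```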

  4. Adaptive learning based heartbeat classification.

    PubMed

    Srinivas, M; Basil, Tony; Mohan, C Krishna

    2015-01-01

    Cardiovascular diseases (CVD) are a leading cause of unnecessary hospital admissions as well as fatalities, placing an immense burden on the healthcare industry. A process to provide timely intervention can reduce the morbidity rate as well as control rising costs. Patients with cardiovascular diseases require quick intervention. Towards that end, automated detection of abnormal heartbeats captured by electrocardiogram (ECG) signals is vital. While cardiologists can identify different heartbeat morphologies quite accurately among different patients, manual evaluation is tedious and time consuming. In this chapter, we propose new features from the time and frequency domains and, furthermore, feature normalization techniques to reduce inter-patient and intra-patient variations in heartbeat cycles. Our results using the adaptive learning based classifier emulate those reported in the existing literature and in most cases deliver improved performance, while eliminating the need for labeling of signals by domain experts. PMID:26484555

  5. An agent based model of genotype editing

    SciTech Connect

    Rocha, L. M.; Huang, C. F.

    2004-01-01

    This paper presents our investigation of an agent-based model of Genotype Editing. This model is based on several characteristics that are gleaned from the RNA editing system as observed in several organisms. The incorporation of editing mechanisms in an evolutionary agent-based model provides a means for evolving agents with heterogeneous post-transcriptional processes. The study of this agent-based genotype-editing model has shed some light on the evolutionary implications of RNA editing as well as established an advantageous evolutionary computation algorithm for machine learning. We expect that our proposed model may both facilitate determining the evolutionary role of RNA editing in biology and advance the current state of research in agent-based optimization.

  6. Lightcurve Based Classification of Transient Events

    NASA Astrophysics Data System (ADS)

    Donalek, Ciro; Graham, M. J.; Mahabal, A.; Djorgovski, S. G.; Drake, A. J.; Moghaddam, B.; Turmon, M.; Chen, Y.; Sharma, N.

    2012-01-01

    In many scientific fields, a new generation of instruments is generating exponentially growing data streams that may enable significant new discoveries. The requirement to perform the analysis rapidly and objectively, coupled with the huge amount of data available, implies a need for automated event detection, classification, and decision making. In astronomy, this is the case with the new generation of synoptic sky surveys, which discover an ever increasing number of transient events. However, not all of them are equally interesting and worthy of follow-up with limited resources. This presents some unusual classification challenges: the data are sparse, heterogeneous and incomplete; they evolve in time; and most of the relevant information comes from a variety of archival data and contextual information. We are exploring a variety of machine learning techniques, using the ongoing CRTS sky survey as a testbed: Bayesian Networks, [dm,dt] histograms, Decision Trees, Neural Networks, Symbolic Regression. In this work we focus on lightcurve based classification using a hierarchical approach where some astrophysically motivated major features are used to separate different groups of classes. Proceeding down the classification hierarchy, every node uses those classifiers that are demonstrated to work best for that particular task.

  7. [Spectral classification based on Bayes decision].

    PubMed

    Liu, Rong; Jin, Hong-Mei; Duan, Fu-Qing

    2010-03-01

    The rapid development of astronomical observation has led to many large sky surveys such as SDSS (Sloan Digital Sky Survey) and LAMOST (Large Sky Area Multi-Object Spectroscopic Telescope). Since these surveys have produced very large numbers of spectra, automated spectral analysis becomes desirable and necessary. The present paper studies a spectral classification method based on Bayes decision theory, which divides spectra into three types: star, galaxy and quasar. Firstly, principal component analysis (PCA) is used for feature extraction, and spectra are projected into the 3D PCA feature space; secondly, the class-conditional probability density functions are estimated using a non-parametric density estimation technique, the Parzen window approach; finally, the minimum-error Bayes decision rule is used for classification. In the Parzen window approach, the kernel width affects the density estimation and hence the classification results. Extensive experiments were performed to analyze the relationship between kernel width and correct classification rate: the correct rate increases as the kernel width approaches a certain threshold, and decreases when the kernel width falls below that threshold. PMID:20496722
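
    The described pipeline maps onto standard tools; here is a hedged sketch with synthetic stand-ins for the spectra and an illustrative kernel width:

    ```python
    # PCA to 3 features, Parzen-window (Gaussian kernel) density per class,
    # then the minimum-error Bayes rule: argmax of prior x conditional density.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 50))        # stand-in "spectra"
    y = rng.integers(0, 3, size=300)      # 0 = star, 1 = galaxy, 2 = quasar

    Z = PCA(n_components=3).fit_transform(X)

    kdes = [KernelDensity(kernel="gaussian", bandwidth=0.5).fit(Z[y == c])
            for c in range(3)]
    priors = [np.mean(y == c) for c in range(3)]

    def bayes_classify(z: np.ndarray) -> int:
        """Pick the class maximizing log prior + log conditional density."""
        scores = [kde.score_samples(z.reshape(1, -1))[0] + np.log(p)
                  for kde, p in zip(kdes, priors)]
        return int(np.argmax(scores))

    print(bayes_classify(Z[0]))
    ```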

  8. Detection and classification of organophosphate nerve agent simulants using support vector machines with multiarray sensors.

    PubMed

    Sadik, Omowunmi; Land, Walker H; Wanekaya, Adam K; Uematsu, Michiko; Embrechts, Mark J; Wong, Lut; Leibensperger, Dale; Volykin, Alex

    2004-01-01

    The need for rapid and accurate detection systems is expanding, and the utilization of cross-reactive sensor arrays to detect chemical warfare agents, in conjunction with novel computational techniques, may prove to be a potential solution to this challenge. We have investigated the detection, prediction, and classification of various organophosphate (OP) nerve agent simulants using sensor arrays with a novel learning scheme known as support vector machines (SVMs). The OPs tested include parathion, malathion, dichlorvos, trichlorfon, paraoxon, and diazinon. A new data reduction software program was written in MATLAB V. 6.1 to extract steady-state and kinetic data from the sensor arrays. The program also creates training sets by mixing and randomly sorting any combination of data categories into both positive and negative cases. The resulting signals were fed into SVM software for "pairwise" and "one vs. all" classification. Experimental results for this new paradigm show a significant increase in classification accuracy when compared to artificial neural networks (ANNs). Three kernels, the S2000, the polynomial, and the Gaussian radial basis function (RBF), were tested and compared to the ANN. The following measures of performance were considered in the pairwise classification: receiver operating curve (ROC) Az indices, specificities, and positive predictive values (PPVs). The increases in ROC Az values, specificities, and PPVs ranged from 5% to 25%, 108% to 204%, and 13% to 54%, respectively, in all OP pairs studied when compared to the ANN baseline. Dichlorvos, trichlorfon, and paraoxon were perfectly predicted. Positive prediction for malathion was 95%. PMID:15032529
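
    For orientation, a pairwise SVM of this kind is a few lines with scikit-learn; the sketch below substitutes an RBF kernel and synthetic sensor responses, since the S2000 kernel and the original data are not publicly packaged:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 32)),   # pretend "dichlorvos"
                   rng.normal(0.8, 1.0, (100, 32))])  # pretend "trichlorfon"
    y = np.array([0] * 100 + [1] * 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    print("ROC Az:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
    ```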

  9. [Automatic classification method of star spectrum data based on classification pattern tree].

    PubMed

    Zhao, Xu-Jun; Cai, Jiang-Hui; Zhang, Ji-Fu; Yang, Hai-Feng; Ma, Yang

    2013-10-01

    Frequent patterns, which appear frequently in a data set, play an important role in data mining. For stellar spectrum classification tasks, a classification rule mining method based on a classification pattern tree is presented on the basis of frequent patterns. The procedure is as follows. Firstly, a new tree structure, the classification pattern tree, is introduced, based on the different frequencies of stellar spectral attributes in the database and their different importance for classification. The related concepts and the construction method of the classification pattern tree are also described in this paper. Then, the characteristics of the stellar spectrum are mapped to the classification pattern tree. Top-down and bottom-up traversals of the classification pattern tree are used to extract the classification rules. Meanwhile, the concept of pattern capability is introduced to adjust the number of classification rules and improve the construction efficiency of the classification pattern tree. Finally, the SDSS (Sloan Digital Sky Survey) stellar spectral data provided by the National Astronomical Observatory are used to verify the accuracy of the method. The results show that a high classification accuracy is achieved. PMID:24409754

  10. Development of a rapid method for the automatic classification of biological agents' fluorescence spectral signatures

    NASA Astrophysics Data System (ADS)

    Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale

    2015-11-01

    Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for the application of UMEL to the discrimination of different BAs' simulants spectra. Through this strategy, it has been possible to discriminate between these BAs' simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.

  11. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
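
    A compressed sketch of the approach, with an invented two-feature-per-scale appearance model standing in for the paper's automatically selected feature set:

    ```python
    # Per-voxel features at several scales feed a KNN voxel classifier.
    import numpy as np
    from scipy import ndimage
    from sklearn.neighbors import KNeighborsClassifier

    def voxel_features(volume: np.ndarray, scales=(1.0, 2.0, 4.0)) -> np.ndarray:
        """Gaussian-smoothed intensity and gradient magnitude per scale,
        flattened to shape (n_voxels, n_features)."""
        feats = []
        for s in scales:
            feats.append(ndimage.gaussian_filter(volume, s))
            feats.append(ndimage.gaussian_gradient_magnitude(volume, s))
        return np.stack([f.ravel() for f in feats], axis=1)

    volume = np.random.default_rng(2).normal(size=(16, 16, 16))
    labels = (volume > 0).astype(int).ravel()   # stand-in airway/background mask
    knn = KNeighborsClassifier(n_neighbors=5).fit(voxel_features(volume), labels)
    ```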

  12. Texture feature based liver lesion classification

    NASA Astrophysics Data System (ADS)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support the clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features to a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP) features, where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combining Gabor, LBP and intensity features improved the results to a final accuracy of 97%.
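
    Two of the compared descriptors (GLCM and LBP) can be sketched with scikit-image; the ROIs below are random stand-ins for real lesion crops, and the specific feature choices are illustrative:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    from sklearn.svm import SVC

    def texture_features(roi: np.ndarray) -> np.ndarray:
        """Concatenate two GLCM properties with a uniform-LBP histogram."""
        glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                            symmetric=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0]
                      for p in ("contrast", "homogeneity")]
        lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([glcm_feats, hist])

    rng = np.random.default_rng(3)
    rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
    X = np.array([texture_features(r) for r in rois])
    y = rng.integers(0, 2, 20)          # stand-in benign/malignant labels
    clf = SVC(kernel="rbf").fit(X, y)
    ```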

  13. Ladar-based terrain cover classification

    NASA Astrophysics Data System (ADS)

    Macedo, Jose; Manduchi, Roberto; Matthies, Larry H.

    2001-09-01

    An autonomous vehicle driving in a densely vegetated environment needs to be able to discriminate between obstacles (such as rocks) and penetrable vegetation (such as tall grass). We propose a technique for terrain cover classification based on the statistical analysis of the range data produced by a single-axis laser rangefinder (ladar). We first present theoretical models for the range distribution in the presence of homogeneously distributed grass and of obstacles partially occluded by grass. We then validate our results with real-world cases, and propose a simple algorithm to robustly discriminate between vegetation and obstacles based on the local statistical analysis of the range data.
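
    The local-statistics discrimination reduces to something like the following sketch, where the spread threshold is invented rather than derived from the paper's range-distribution models:

    ```python
    # Grass scatters returns over depth (high spread); a solid obstacle
    # returns a tight range cluster (low spread).
    import numpy as np

    def classify_window(ranges: np.ndarray, std_threshold: float = 0.15) -> str:
        """Label a window of ladar range returns (metres)."""
        return "vegetation" if np.std(ranges) > std_threshold else "obstacle"

    rng = np.random.default_rng(5)
    grass = 5.0 + rng.exponential(0.3, 50)   # penetrable, spread-out returns
    rock = 5.0 + rng.normal(0.0, 0.02, 50)   # hard surface, tight returns
    print(classify_window(grass), classify_window(rock))
    ```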

  14. Classification based on full decision trees

    NASA Astrophysics Data System (ADS)

    Genrikhov, I. E.; Djukova, E. V.

    2012-04-01

    The ideas underlying a series of the authors' studies dealing with the design of classification algorithms based on full decision trees are further developed. It is shown that the decision tree construction under consideration takes into account all the features satisfying a branching criterion. Full decision trees with an entropy branching criterion are studied as applied to precedent-based pattern recognition problems with real-valued data. Recognition procedures are constructed for solving problems with incomplete data (gaps in the feature descriptions of the objects) in the case when the learning objects are nonuniformly distributed over the classes. The authors' basic results previously obtained in this area are overviewed.
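
    For contrast with the full trees studied here, which branch on every feature satisfying the criterion, a standard greedy entropy tree is readily available off the shelf; the full-tree construction itself is not:

    ```python
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)  # entropy branching
    print("training accuracy:", tree.score(X, y))
    ```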

  15. Assurance in Agent-Based Systems

    SciTech Connect

    Gilliom, Laura R.; Goldsmith, Steven Y.

    1999-05-10

    Our vision of the future of information systems is one that includes engineered collectives of software agents which are situated in an environment over years and which increasingly improve the performance of the overall system of which they are a part. At a minimum, the movement of agent and multi-agent technology into National Security applications, including their use in information assurance, is apparent today. The use of deliberative, autonomous agents in high-consequence/high-security applications will require a commensurate level of protection and confidence in the predictability of system-level behavior. At Sandia National Laboratories, we have defined and are addressing a research agenda that integrates surety (safety, security, and reliability) into agent-based systems at a deep level. Surety is addressed at multiple levels: the integrity of individual agents must be protected by addressing potential failure modes and vulnerabilities to malevolent threats. Providing for the surety of the collective requires attention to communications surety issues and mechanisms for identifying and working with trusted collaborators. At the highest level, using agent-based collectives within a large-scale distributed system requires the development of principled design methods to deliver the desired emergent performance or surety characteristics. This position paper will outline the research directions underway at Sandia, will discuss relevant work being performed elsewhere, and will report progress to date toward assurance in agent-based systems.

  16. Ecology Based Decentralized Agent Management System

    NASA Technical Reports Server (NTRS)

    Peysakhov, Maxim D.; Cicirello, Vincent A.; Regli, William C.

    2004-01-01

    The problem of maintaining a desired number of mobile agents on a network is not trivial, especially if we want a completely decentralized solution. Decentralized control makes a system more robust and less susceptible to partial failures. The problem is exacerbated on wireless ad hoc networks where host mobility can result in significant changes in the network size and topology. In this paper we propose an ecology-inspired approach to the management of the number of agents. The approach associates agents with living organisms and tasks with food. Agents procreate or die based on the abundance of uncompleted tasks (food). We performed a series of experiments investigating properties of such systems and analyzed their stability under various conditions. We concluded that the ecology based metaphor can be successfully applied to the management of agent populations on wireless ad hoc networks.
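
    The procreate-or-die rule invites a toy simulation; the birth and death probabilities below are invented, not taken from the paper's experiments:

    ```python
    # Agents eat tasks ("food"); fed agents may replicate, starved agents may die.
    import random

    def step(agents: int, tasks: int) -> int:
        """One generation: each agent attempts to complete one task."""
        fed = min(agents, tasks)
        starved = agents - fed
        births = sum(random.random() < 0.5 for _ in range(fed))
        deaths = sum(random.random() < 0.8 for _ in range(starved))
        return agents + births - deaths

    agents = 10
    for t in range(20):
        tasks = random.randint(5, 30)        # fluctuating workload
        agents = step(agents, tasks)
        print(f"t={t:2d} tasks={tasks:2d} agents={agents}")
    ```

    In this toy version the population tracks the food supply: sustained task surpluses grow the collective, and scarcity shrinks it, with no central controller involved.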

  17. Digital image-based classification of biodiesel.

    PubMed

    Costa, Gean Bezerra; Fernandes, David Douglas Sousa; Almeida, Valber Elias; Araújo, Thomas Souto Policarpo; Melo, Jessica Priscila; Diniz, Paulo Henrique Gonçalves Dias; Véras, Germano

    2015-07-01

    This work proposes a simple, rapid, inexpensive, and non-destructive methodology based on digital images and pattern recognition techniques for classification of biodiesel according to oil type (cottonseed, sunflower, corn, or soybean). For this, differing color histograms in RGB (extracted from digital images), HSI, Grayscale channels, and their combinations were used as analytical information, which was then statistically evaluated using Soft Independent Modeling by Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and variable selection using the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). Despite good performances by the SIMCA and PLS-DA classification models, SPA-LDA provided better results (up to 95% for all approaches) in terms of accuracy, sensitivity, and specificity for both the training and test sets. The variables selected by the Successive Projections Algorithm clearly contained the information necessary for biodiesel type classification. This is important since a product may exhibit different properties, depending on the feedstock used. Such variations directly influence the quality, and consequently the price. Moreover, intrinsic advantages such as quick analysis, requiring no reagents, and a noteworthy reduction (the avoidance of chemical characterization) of waste generation, all contribute towards the primary objective of green chemistry. PMID:25882407
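
    As a hedged sketch of the histogram-features idea, plain LDA below stands in for SPA-LDA (the successive-projections variable selection is omitted) and random arrays stand in for the photographs:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def rgb_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
        """Concatenated per-channel histograms of an (H, W, 3) uint8 image."""
        return np.concatenate([
            np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
            for c in range(3)
        ])

    rng = np.random.default_rng(4)
    images = rng.integers(0, 256, (40, 64, 64, 3), dtype=np.uint8)
    y = rng.integers(0, 4, 40)   # cottonseed / sunflower / corn / soybean
    X = np.array([rgb_histogram(im) for im in images])
    lda = LinearDiscriminantAnalysis().fit(X, y)
    ```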

  18. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, this research has demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (of degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel, meaning that the number of false negative errors was reduced by 173% while making no false positive errors; (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair. For the most difficult classification task, the Parathion versus

  19. Brain extraction based on locally linear representation-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Jiang, Jun; Wu, Yao; Zhang, Yu; Chen, Wufan; Feng, Qianjin

    2014-05-15

    Brain extraction is an important procedure in brain image analysis. Although numerous brain extraction methods have been presented, enhancing brain extraction methods remains challenging because brain MRI images exhibit complex characteristics, such as anatomical variability and intensity differences across different sequences and scanners. To address this problem, we present a Locally Linear Representation-based Classification (LLRC) method for brain extraction. A novel classification framework is derived by introducing the locally linear representation to the classical classification model. Under this classification framework, a common label fusion approach can be considered as a special case and thoroughly interpreted. Locality is important for calculating fusion weights in LLRC; this factor also indicates that Local Anchor Embedding is more applicable for solving the locally linear coefficients than other linear representation approaches. Moreover, LLRC supplies a way to learn the optimal classification scores of the training samples in the dictionary to obtain accurate classification. The International Consortium for Brain Mapping and the Alzheimer's Disease Neuroimaging Initiative databases were used to build a training dataset containing 70 scans. To evaluate the proposed method, we used four publicly available datasets (IBSR1, IBSR2, LPBA40, and ADNI3T, with a total of 241 scans). Experimental results demonstrate that the proposed method outperforms four common brain extraction methods (BET, BSE, GCUT, and ROBEX) and is comparable to BEaST, while being more accurate than BEaST on some datasets. PMID:24525169

  20. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  1. Cirrhosis Classification Based on Texture Classification of Random Features

    PubMed Central

    Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them in choosing a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not yet meet the clinical needs of cirrhosis staging, and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. Extracting texture features is therefore the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective alternative; we adopt it and propose CCTCRF for triple classification (normal, early, and middle-and-advanced stage). CCTCRF needs no strong assumptions except the sparse character of the image, contains sufficient texture information, follows a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfactory performance, which is also compared with a typical neural network using GLCM features. PMID:24707317

  2. Multimodal based classification of schizophrenia patients.

    PubMed

    Cetin, Mustafa S; Houck, Jon M; Vergara, Victor M; Miller, Robyn L; Calhoun, Vince

    2015-01-01

    Schizophrenia is currently diagnosed by physicians through clinical assessment and their evaluation of patients' self-reported experiences over the longitudinal course of the illness. There is great interest in identifying biologically based markers at the onset of illness, rather than relying on the evolution of symptoms across time. Functional network connectivity shows promise in providing individual-subject predictive power. The majority of previous studies considered the analysis of functional connectivity during resting state using only fMRI. However, exclusive reliance on fMRI to generate such networks may limit inference on dysfunctional connectivity, which is hypothesized to underlie patient symptoms. In this work, we propose a framework for classification of schizophrenia patients and healthy control subjects based on using both fMRI and band-limited envelope correlation metrics in MEG to interrogate functional network components in the resting state. Our results show that the combination of these two methods provides valuable information that captures fundamental characteristics of brain network connectivity in schizophrenia. Such information is useful for the prediction of schizophrenia. Classification accuracy was improved significantly (up to ≈ 7%) relative to the fMRI method alone and (up to ≈ 21%) relative to the MEG method alone. PMID:26736831

  3. Patterns of Use of an Agent-Based Model and a System Dynamics Model: The Application of Patterns of Use and the Impacts on Learning Outcomes

    ERIC Educational Resources Information Center

    Thompson, Kate; Reimann, Peter

    2010-01-01

    A classification system that was developed for the use of agent-based models was applied to strategies used by school-aged students to interrogate an agent-based model and a system dynamics model. These were compared, and relationships between learning outcomes and the strategies used were also analysed. It was found that the classification system…

  4. Agent Based Modeling Applications for Geosciences

    NASA Astrophysics Data System (ADS)

    Stein, J. S.

    2004-12-01

    Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include: significant computational requirements in order to keep track of thousands to millions of agents; methods and strategies of model validation are lacking, as is a formal methodology for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications. A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  5. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  6. NISAC Agent Based Laboratory for Economics

    SciTech Connect

    Downes, Paula; Davis, Chris; Eidson, Eric; Ehlen, Mark; Gieseler, Charles; Harris, Richard

    2006-10-11

    The software provides large-scale microeconomic simulation of complex economic and social systems (such as supply chain and market dynamics of businesses in the US economy) and their dependence on physical infrastructure systems. The system is based on agent simulation, where each entity of interest in the system to be modeled (for example, a bank, individual firms, consumer households, etc.) is specified in a data-driven sense to be individually represented by an agent. The agents interact using rules of interaction appropriate to their roles, and through those interactions complex economic and social dynamics emerge. The software is implemented in three tiers: a Java-based visualization client, a C++ control mid-tier, and a C++ computational tier.

  7. NISAC Agent Based Laboratory for Economics

    Energy Science and Technology Software Center (ESTSC)

    2006-10-11

    The software provides large-scale microeconomic simulation of complex economic and social systems (such as supply chain and market dynamics of businesses in the US economy) and their dependence on physical infrastructure systems. The system is based on agent simulation, where each entity of interest in the system to be modeled (for example, a bank, individual firms, consumer households, etc.) is specified in a data-driven sense to be individually represented by an agent. The agents interact using rules of interaction appropriate to their roles, and through those interactions complex economic and social dynamics emerge. The software is implemented in three tiers: a Java-based visualization client, a C++ control mid-tier, and a C++ computational tier.

  8. Text Classification Using ESC-Based Stochastic Decision Lists.

    ERIC Educational Resources Information Center

    Li, Hang; Yamanishi, Kenji

    2002-01-01

    Proposes a new method of text classification using stochastic decision lists, ordered sequences of IF-THEN-ELSE rules. The method can be viewed as a rule-based method for text classification having advantages of readability and refinability of acquired knowledge. Advantages of rule-based methods over non-rule-based ones are empirically verified.…
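
    As an illustration of the decision-list idea described above (an ordered sequence of IF-THEN-ELSE rules), here is a minimal Python sketch; the keyword rules, categories, and confidence values are invented for demonstration, and the ESC-based rule-learning step that would produce them is not shown.

    ```python
    # Minimal sketch of a decision list for text classification (hypothetical
    # rules and categories; the rule-learning step is not shown). Each rule is
    # (keyword, class, confidence); the first matching rule fires, and a
    # default class covers documents matched by no rule.

    RULES = [
        ("goal",     "sports",   0.92),   # IF "goal" in doc THEN sports
        ("election", "politics", 0.88),   # ELSE IF "election" in doc THEN politics
        ("GPU",      "tech",     0.85),   # ELSE IF "GPU" in doc THEN tech
    ]
    DEFAULT = ("other", 0.50)             # ELSE other

    def classify(doc: str):
        tokens = doc.lower().split()
        for keyword, label, confidence in RULES:
            if keyword.lower() in tokens:
                return label, confidence
        return DEFAULT

    print(classify("The striker scored a late goal"))   # ('sports', 0.92)
    ```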

  9. Classification techniques based on AI application to defect classification in cast aluminum

    NASA Astrophysics Data System (ADS)

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    This paper describes the artificial intelligence techniques applied to the interpretation of images of cast aluminum surfaces presenting different defects. The whole process includes on-line defect detection, feature extraction, and defect classification; these topics are discussed in depth throughout the paper. Data preprocessing, as well as segmentation and feature extraction, is described, and the algorithms employed along with the descriptors used are shown. A syntactic filter has been developed to model the information and to generate the input vector to the classification system. Classification of defects is achieved by means of rule-based systems, fuzzy models, and neural nets. Different classification subsystems perform together to resolve a pattern recognition problem (hybrid systems). First, syntactic methods are used to obtain the filter that reduces the dimension of the input vector to the classification process. Rule-based classification is achieved by associating a grammar with each defect type; the knowledge base is formed by the information derived from the syntactic filter along with the inferred rules. The fuzzy classification subsystem uses production rules with fuzzy antecedents whose consequents are membership degrees for every defect type. Different architectures of neural nets have been implemented with different results, as shown throughout the paper. At the higher classification level, the information given by the heterogeneous systems, as well as the history of the process, is supplied to an expert system in order to drive the casting process.

  10. Hydrological Land Classification Based on Landscape Units

    NASA Astrophysics Data System (ADS)

    Gharari, S.; hrachowitz, M.; Fenicia, F.; Savenije, H.

    2011-12-01

    Landscape classification into meaningful hydrological units has important implications for hydrological modeling. Conceptual hydrological models, such as HBV-type models, are most commonly designed to represent catchments in a lumped or, at best, semi-distributed way, i.e. treating them as single entities or sometimes accounting for topographical and land cover variability by introducing some level of stratification. These oversimplifications can frequently lead to substantial misrepresentations of flow generating processes in the catchments in question, as feedback processes between topography, land cover and hydrology in different landscape units are poorly represented. By making use of readily available topographical information, hydrological units can be identified based on the concept of "Height above Nearest Drainage" (HAND; Rennó et al., 2008). These units are characterized by distinct hydrological behavior, and they can be represented using different model structures (Savenije, 2010). We selected the Wark Catchment in the Grand Duchy of Luxembourg and identified three landscape units: plateau, wetland and hillslope. The original HAND was compared to other, similar models for landscape classification which make use of other topographical indicators. The models were applied to a 5×5 m² DEM and were tested using data collected in the field. The comparison between the models showed that HAND is a more appropriate hydrological descriptor than the other models. The map of the classified landscape was set in a probabilistic framework and was then used to determine the proportion of the individual units in the catchment. Different model structures were then assigned to the individual units and used to model total runoff.

  11. FIPA agent based network distributed control system

    SciTech Connect

    D. Abbott; V. Gyurjyan; G. Heyes; E. Jastrzembski; C. Timmer; E. Wolin

    2003-03-01

    A control system with the capability to combine heterogeneous control systems or processes into a uniform homogeneous environment is discussed. This dynamically extensible system is an example of a software system at the agent level of abstraction. This level of abstraction considers agents as atomic entities that communicate to implement the functionality of the control system. Agent engineering aspects are addressed by adopting the domain-independent software standard formulated by FIPA. The Jade core Java classes are used as a FIPA specification implementation. A special, lightweight, XML/RDFS-based, control-oriented ontology markup language has been developed to standardize the description of an arbitrary control system data processor. Control processes described in this language are integrated into the global system at runtime, without actual programming. Fault tolerance and recovery issues are also addressed.

  12. Classification of CMEs Based on Their Dynamics

    NASA Astrophysics Data System (ADS)

    Nicewicz, J.; Michalek, G.

    2016-05-01

    A large set of coronal mass ejections (CMEs; 6621 events) has been selected to study their dynamics as seen in the field of view (LFOV) of the Large Angle and Spectroscopic Coronagraph (LASCO) onboard the Solar and Heliospheric Observatory (SOHO). These events were selected for having at least six height-time measurements, so that their dynamic properties in the LFOV can be evaluated with reasonable accuracy. Height-time measurements (from the SOHO/LASCO catalog) were used to determine the velocities and accelerations of individual CMEs at successive distances from the Sun. Linear and quadratic functions were fitted to these data points. On the basis of the best fits to the velocity data points, we were able to classify CMEs into four groups. These types of CMEs not only have different dynamic behaviors but also different masses, widths, velocities, and accelerations. We also show that these groups of events are initiated by different onset mechanisms. The results of our study allow us to present a consistent classification of CMEs based on their dynamics.
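
    The velocity and acceleration estimates described above come from linear and quadratic fits to height-time measurements. Below is a hedged Python sketch of that fitting step using NumPy; the height-time points are invented, not taken from the SOHO/LASCO catalog.

    ```python
    # Hedged sketch: derive a CME's average speed and acceleration from
    # height-time measurements by fitting linear and quadratic models.
    # The data points below are invented for illustration.
    import numpy as np

    t = np.array([0.0, 1200, 2400, 3600, 4800, 6000])        # s since first frame
    h = np.array([3.0, 5.1, 7.4, 9.9, 12.6, 15.5]) * 6.96e5  # solar radii -> km

    v_lin, _ = np.polyfit(t, h, 1)        # linear fit: h = v*t + h0
    a2, _, _ = np.polyfit(t, h, 2)        # quadratic fit: h = a2*t^2 + v0*t + c
    print(f"mean speed   ~ {v_lin:.1f} km/s")
    print(f"acceleration ~ {2 * a2 * 1e3:.1f} m/s^2")  # d2h/dt2 = 2*a2, km -> m
    ```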

  13. Structure-based algorithms for microvessel classification

    PubMed Central

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2014-01-01

    Objective Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries and venules. PMID:25403335

  14. Multiscale agent-based consumer market modeling.

    SciTech Connect

    North, M. J.; Macal, C. M.; St. Aubin, J.; Thimmapuram, P.; Bragen, M.; Hahn, J.; Karr, J.; Brigham, N.; Lacy, M. E.; Hampton, D.; Decision and Information Sciences; Procter & Gamble Co.

    2010-05-01

    Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This need is particularly critical when a model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped in industrial applications because of the detail this type of modeling requires. However, a complementary method - agent-based modeling - shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine if brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems, where it directly influenced managerial decision making and produced substantial cost savings.

  15. Classification

    NASA Astrophysics Data System (ADS)

    Oza, Nikunj

    2012-03-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples—examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. The generalization performance of a learned model (how closely the target outputs and the model's predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted. In the sunspot classification example given above, each training example
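
    The workflow this chapter describes (training set, learned model, predictions, generalization estimate) can be illustrated with a short Python sketch; the "sunspot measurements" below are synthetic stand-ins, and the choice of learner is arbitrary.

    ```python
    # Illustrative sketch of the supervised classification workflow described
    # above, using scikit-learn and synthetic "sunspot measurement" features
    # (the data here are invented for demonstration).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))                  # x_i: four measurements per sunspot
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # y_i: sunspot type (two classes)

    # Training set T drawn i.i.d.; a held-out set estimates generalization.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)   # learner L(T)
    print("generalization accuracy:", accuracy_score(y_te, model.predict(X_te)))
    ```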

  16. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node, 6-strut pyramid structure that is being used by the NASA Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable: the tetrahedron "moves" through the sequence of extending a strut, changing the center of gravity, and falling. Currently, strut extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendible strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.

  17. Agent-Based Modeling in Systems Pharmacology.

    PubMed

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling. PMID:26783498

  18. Hyperspectral imagery classification based on relevance vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Guopeng; Yu, Xuchu; Feng, Wufa; Xu, Weixiao; Zhang, Pengqiang

    2009-10-01

    The relevance vector machine (RVM) is a sparse model in the Bayesian framework; its mathematical model has no regularization coefficient, and its kernel functions do not need to satisfy Mercer's condition. RVMs offer good generalization performance, and their predictions are probabilistic. In this paper, a hyperspectral imagery classification method based on the relevance vector machine is put forward. We introduce the sparse Bayesian classification model, regard RVM learning as the maximization of marginal likelihood, and select the fast sequential sparse Bayesian learning algorithm. Through an experiment in PHI imagery classification, the advantages of the relevance vector machine for hyperspectral imagery classification are shown.

  19. CATS-based Air Traffic Controller Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2002-01-01

    This report describes intelligent agents that function as air traffic controllers. Each agent controls traffic in a single sector in real time; agents controlling traffic in adjoining sectors can coordinate to manage an arrival flow across a given meter fix. The purpose of this research is threefold. First, it seeks to study the design of agents for controlling complex systems. In particular, it investigates agent planning and reactive control functionality in a dynamic environment in which a variety of perceptual and decision-making skills play a central role. It examines how heuristic rules can be applied to model planning and decision-making skills, rather than attempting to apply optimization methods. Thus, the research attempts to develop intelligent agents that provide an approximation of human air traffic controller behavior that, while not based on an explicit cognitive model, does produce task performance consistent with the way human air traffic controllers operate. Second, this research seeks to extend previous research on using the Crew Activity Tracking System (CATS) as the basis for intelligent agents. The agents use a high-level model of air traffic controller activities to structure the control task. To execute an activity in the CATS model, according to the current task context, the agents reference a 'skill library' and 'control rules' that in turn execute the pattern recognition, planning, and decision making required to perform the activity. Applying the skills enables the agents to modify their representation of the current control situation (i.e., the 'flick' or 'picture'). The updated representation supports the next activity in a cycle of action that, taken as a whole, simulates air traffic controller behavior. A third, practical motivation for this research is to use intelligent agents to support evaluation of new air traffic control (ATC) methods to support new Air Traffic Management (ATM) concepts. Current approaches that use large, human

  20. Preliminary Research on Grassland Fine-classification Based on MODIS

    NASA Astrophysics Data System (ADS)

    Hu, Z. W.; Zhang, S.; Yu, X. Y.; Wang, X. S.

    2014-03-01

    Grassland ecosystems are important for climate regulation and for maintaining soil and water. Research on grassland monitoring methods can provide an effective reference for grassland resource investigation. In this study, we used the vegetation index method for grassland classification. Because China spans several climate types, we used China's Main Climate Zone Maps to divide the study region into four climate zones. Based on the grassland classification system of the first nation-wide grass resource survey in China, we established a new grassland classification system suitable only for this research. We used MODIS images as the basic data resource and the expert classifier method to perform grassland classification. Based on the 1:1,000,000 Grassland Resource Map of China, we obtained the basic distribution of all grassland types and selected 20 samples evenly distributed within each type, then used NDVI/EVI products to summarize the spectral features of the different grassland types. Finally, we introduced other auxiliary classification data, such as elevation, accumulated temperature (AT), humidity index (HI) and rainfall. China's nation-wide grassland classification map results from merging the grassland classes of the different climate zones. The overall classification accuracy is 60.4%. The results indicate that the expert classifier is appropriate for nation-wide grassland classification, but the classification accuracy needs to be improved.
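
    As a rough illustration of vegetation-index-based grassland mapping, the sketch below computes NDVI from red and near-infrared bands and applies simple expert-style threshold rules; the reflectance values, class names, and thresholds are all invented, and the paper's actual rules also draw on elevation, accumulated temperature, humidity index, and rainfall.

    ```python
    # Minimal sketch of vegetation-index computation and rule-based (expert
    # classifier style) labeling; band arrays, classes, and thresholds are
    # hypothetical.
    import numpy as np

    red = np.array([[0.08, 0.20], [0.30, 0.05]])   # red reflectance
    nir = np.array([[0.45, 0.40], [0.35, 0.50]])   # near-infrared reflectance

    ndvi = (nir - red) / (nir + red + 1e-9)

    # Invented thresholds; auxiliary data (elevation, AT, HI, rainfall) would
    # refine these rules in a real expert classifier.
    labels = np.where(ndvi < 0.2, "sparse/desert steppe",
                      np.where(ndvi < 0.5, "typical steppe", "meadow steppe"))
    print(ndvi.round(2))
    print(labels)
    ```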

  1. A Curriculum-Based Classification System for Community Colleges.

    ERIC Educational Resources Information Center

    Schuyler, Gwyer

    2003-01-01

    Proposes and tests a community college classification system based on curricular characteristics and their association with institutional characteristics. Seeks readily available data correlates to represent percentage of a college's course offerings that are in the liberal arts. A simple two-category classification system using total enrollment…

  2. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference; the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
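
    The random-forest factor-importance step described above can be sketched as follows; the terrain factor names, data, and labels are synthetic placeholders rather than the study's 1 km DEM derivatives.

    ```python
    # Hedged sketch of using random-forest feature importance to rank terrain
    # factors; factor names and data are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    factors = ["slope", "relief", "roughness", "curvature", "elevation"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, len(factors)))      # terrain factors per DEM cell
    y = rng.integers(0, 4, size=500)              # landform type labels (4 classes)

    rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
    ranked = sorted(zip(rf.feature_importances_, factors), reverse=True)
    for importance, name in ranked:
        print(f"{name:10s} {importance:.3f}")     # candidates for segmentation thresholds
    ```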

  3. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples—examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.

  4. Robust spike classification based on frequency domain neural waveform features

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    Objective. We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. Approach. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. Main results. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. Significance. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm
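
    A minimal sketch of the frequency-domain idea behind CFDF is given below: FFT magnitudes of spike snippets serve as features, which are then clustered (here directly with k-means; the paper uses an SOM to choose the cluster count first). The snippets are synthetic.

    ```python
    # Hedged sketch of frequency-domain spike classification in the spirit of
    # CFDF: FFT magnitudes of detected snippets as features, clustered with
    # k-means. A real pipeline would use detected waveforms and an SOM step
    # to pick the number of clusters; snippets here are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    n, length = 200, 48                            # 200 snippets of ~2 ms each
    t = np.arange(length)
    unit_a = np.exp(-(t - 12) ** 2 / 20.0)         # two synthetic spike shapes
    unit_b = -np.exp(-(t - 20) ** 2 / 40.0)
    snippets = np.vstack([
        unit_a + 0.1 * rng.normal(size=(n // 2, length)),
        unit_b + 0.1 * rng.normal(size=(n // 2, length)),
    ])

    features = np.abs(np.fft.rfft(snippets, axis=1))  # frequency-domain features
    labels = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(features)
    print(np.bincount(labels))                        # roughly 100 snippets per unit
    ```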

  5. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  6. Behavior Based Social Dimensions Extraction for Multi-Label Classification.

    PubMed

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes' behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes' connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  7. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849
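
    A hedged sketch of the behavior-based pipeline described above: LDA models nodes' interaction counts, the inferred topic mixtures serve as latent social dimensions, and a downstream classifier is trained on them. The network data, dimensionality, and classifier choice here are invented for illustration.

    ```python
    # Hedged sketch: model nodes' link "behavior" with LDA and use the inferred
    # topic (community-connection) mixtures as social dimensions for a
    # downstream classifier. Data are synthetic.
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    # Rows: nodes; columns: counts of interactions with other nodes/objects.
    behavior_counts = rng.poisson(1.0, size=(120, 40))
    labels = rng.integers(0, 2, size=120)          # one of the multi-label targets

    lda = LatentDirichletAllocation(n_components=8, random_state=3)
    social_dims = lda.fit_transform(behavior_counts)   # latent social dimensions

    clf = LogisticRegression(max_iter=1000).fit(social_dims, labels)
    print("train accuracy:", clf.score(social_dims, labels))
    ```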

  8. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  9. Tensor Modeling Based for Airborne LiDAR Data Classification

    NASA Astrophysics Data System (ADS)

    Li, N.; Liu, C.; Pfeifer, N.; Yin, J. F.; Liao, Z. Y.; Zhou, Y.

    2016-06-01

    Feature selection and description is a key factor in classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating the features or the "raw" data attributes. Then the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation keeps the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
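
    One way to realize this pipeline is sketched below: per-pixel feature rasters are stacked into a tensor, the feature mode is compressed with an SVD of the mode-3 unfolding (a one-mode Tucker/HOSVD step standing in for the paper's tensor decomposition), and pixels are classified with k nearest neighbors. All rasters and labels are synthetic.

    ```python
    # Hedged sketch: stack per-pixel LiDAR feature rasters into a
    # (rows x cols x features) tensor, compress the feature mode via SVD of
    # the mode-3 unfolding, then classify pixels with k nearest neighbors.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(4)
    tensor = rng.normal(size=(32, 32, 9))            # 9 feature rasters, 32x32 cells

    unfolded = tensor.reshape(-1, 9)                 # mode-3 unfolding: pixels x features
    _, _, vt = np.linalg.svd(unfolded, full_matrices=False)
    components = unfolded @ vt[:3].T                 # keep 3 component features

    labels = rng.integers(0, 3, size=components.shape[0])   # e.g. ground/vegetation/building
    knn = KNeighborsClassifier(n_neighbors=5).fit(components, labels)
    print("predicted labels:", knn.predict(components[:5]))
    ```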

  10. Better image texture recognition based on SVM classification

    NASA Astrophysics Data System (ADS)

    Liu, Kuan; Lu, Bin; Wei, Yaxun

    2013-10-01

    Texture classification is very important in remote sensing images, X-ray photos, and cell image interpretation and processing, and it is an active research area in computer vision, image processing, image analysis, and image retrieval. For spatial-domain images, texture analysis can use statistical methods to calculate a texture feature vector. In this paper, we use the gray-level co-occurrence matrix and Gabor filters to calculate the feature vector. For feature vector classification, Bayesian methods, KNN, or BP neural networks are commonly used; here we use a statistical classification method based on SVM to classify images. Image classification generally includes four steps: image preprocessing, feature extraction, feature selection, and classification. In this paper, we use gray-scale images, extract features by calculating the gray-level co-occurrence matrix and applying Gabor filtering, and then use an SVM for training and classification. The test results show that the SVM method is well suited to the problem of texture-feature-based image classification and exhibits strong adaptability and robustness.
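
    A compact sketch of the GLCM-to-SVM pipeline using scikit-image and scikit-learn follows; the 8-bit texture patches are synthetic, and the Gabor-filter features are omitted for brevity.

    ```python
    # Hedged sketch of the GLCM -> feature vector -> SVM pipeline; images are
    # synthetic 8-bit patches, and Gabor features are omitted.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(patch):
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    rng = np.random.default_rng(5)
    smooth = [rng.integers(100, 120, (32, 32), dtype=np.uint8) for _ in range(20)]
    coarse = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]

    X = np.array([glcm_features(p) for p in smooth + coarse])
    y = np.array([0] * 20 + [1] * 20)               # two texture classes
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```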

  11. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection. PMID:26353275

  12. A Classification-based Review Recommender

    NASA Astrophysics Data System (ADS)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

  13. Classification

    ERIC Educational Resources Information Center

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  14. Epiretinal membrane: optical coherence tomography-based diagnosis and classification.

    PubMed

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system. PMID:27099458

  15. Epiretinal membrane: optical coherence tomography-based diagnosis and classification

    PubMed Central

    Stevenson, William; Prospero Ponce, Claudia M; Agarwal, Daniel R; Gelman, Rachel; Christoforidis, John B

    2016-01-01

    Epiretinal membrane (ERM) is a disorder of the vitreomacular interface characterized by symptoms of decreased visual acuity and metamorphopsia. The diagnosis and classification of ERM has traditionally been based on clinical examination findings. However, modern optical coherence tomography (OCT) has proven to be more sensitive than clinical examination for the diagnosis of ERM. Furthermore, OCT-derived findings, such as central foveal thickness and inner segment ellipsoid band integrity, have shown clinical relevance in the setting of ERM. To date, no OCT-based ERM classification scheme has been widely accepted for use in clinical practice and investigation. Herein, we review the pathogenesis, diagnosis, and classification of ERMs and propose an OCT-based ERM classification system. PMID:27099458

  16. EXTENDING AQUATIC CLASSIFICATION TO THE LANDSCAPE SCALE HYDROLOGY-BASED STRATEGIES

    EPA Science Inventory

    Aquatic classification of single water bodies (lakes, wetlands, estuaries) is often based on geologic origin, while stream classification has relied on multiple factors related to landform, geomorphology, and soils. We have developed an approach to aquatic classification based o...

  17. Agent Based Modeling as an Educational Tool

    NASA Astrophysics Data System (ADS)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. (Figures: a simulation using California GIS data, and a simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.)

  18. Comparison and Analysis of Biological Agent Category Lists Based On Biosafety and Biodefense

    PubMed Central

    Tian, Deqiao; Zheng, Tao

    2014-01-01

    Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category list and the defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category list and the defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among or between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct their development of biodefense capabilities. Developing a suitable biological agent list should also strengthen international collaboration and cooperation. PMID:24979754

  19. Comparison and analysis of biological agent category lists based on biosafety and biodefense.

    PubMed

    Tian, Deqiao; Zheng, Tao

    2014-01-01

    Biological agents pose a serious threat to human health, economic development, social stability and even national security. The classification of biological agents is a basic requirement for both biosafety and biodefense. We compared and analyzed the Biological Agent Laboratory Biosafety Category list and the defining criteria according to the World Health Organization (WHO), the National Institutes of Health (NIH), the European Union (EU) and China. We also compared and analyzed the Biological Agent Biodefense Category list and the defining criteria according to the Centers for Disease Control and Prevention (CDC) of the United States, the EU and Russia. The results show some inconsistencies among or between the two types of category lists and criteria. We suggest that the classification of biological agents based on laboratory biosafety should reduce the number of inconsistencies and contradictions. Developing countries should also produce lists of biological agents to direct their development of biodefense capabilities. Developing a suitable biological agent list should also strengthen international collaboration and cooperation. PMID:24979754

  20. Agent-based models of financial markets

    NASA Astrophysics Data System (ADS)

    Samanidou, E.; Zschischang, E.; Stauffer, D.; Lux, T.

    2007-03-01

    This review deals with several microscopic ('agent-based') models of financial markets which have been studied by economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-Solomon, Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi, Donangelo-Sneppen and Solomon-Levy-Huang. After an overview of simulation approaches in financial economics, we first give a summary of the Donangelo-Sneppen model of monetary exchange and compare it with related models in the economics literature. Our selective review then outlines the main ingredients of some influential early models of multi-agent dynamics in financial markets (Kim-Markowitz, Levy-Levy-Solomon). As will be seen, these contributions draw their inspiration from the complex appearance of investors' interactions in real-life markets. Their main aim is to reproduce (and thereby provide possible explanations for) the spectacular bubbles and crashes seen in certain historical episodes, but they lack (like almost all the work before 1998 or so) a perspective in terms of the universal statistical features of financial time series. In fact, awareness of a set of such regularities (power-law tails of the distribution of returns, temporal scaling of volatility) only gradually appeared over the nineties. With the more precise description of the formerly relatively vague characteristics (e.g. moving from the notion of fat tails to the more concrete one of a power law with index around three), it became clear that financial market dynamics give rise to some kind of universal scaling law. Showing similarities with scaling laws for other systems with many interacting sub-units, an exploration of financial markets as multi-agent systems appeared to be a natural consequence. This topic has been pursued by quite a number of contributions appearing in both the physics and economics literature since the late nineties. From the wealth of different flavours of multi-agent models that have appeared up to now, we discuss the Cont

  1. Agent-based modeling in ecological economics.

    PubMed

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems. PMID:20146761

  2. Agent Based Model of Livestock Movements

    NASA Astrophysics Data System (ADS)

    Miron, D. J.; Emelyanova, I. V.; Donald, G. E.; Garner, G. M.

    The modelling of livestock movements within Australia is of national importance for the purposes of the management and control of exotic disease spread, infrastructure development and the economic forecasting of livestock markets. In this paper an agent based model for the forecasting of livestock movements is presented, which models livestock movements from farm to farm through a saleyard. The decision of farmers to sell or buy cattle is often complex and involves many factors such as the climate forecast, commodity prices, the type of farm enterprise, the number of animals available and associated off-shore effects. In this model the farm agent's intelligence is implemented using a fuzzy decision tree that utilises two of these factors: the livestock price fetched at the last sale and the number of stock on the farm. On each iteration of the model farms choose either to buy, sell or abstain from the market, thus creating an artificial supply and demand. The buyers and sellers then congregate at the saleyard, where livestock are auctioned using a second-price sealed bid. The price time series output by the model exhibits properties similar to those found in real livestock markets.
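
    A toy sketch of one market step follows: farm agents decide to buy, sell, or abstain (with a simplified price/stock rule standing in for the paper's fuzzy decision tree), and lots are cleared by a second-price sealed-bid auction. All herd sizes, prices, and thresholds are invented.

    ```python
    # Toy sketch of a single market iteration: simplified buy/sell/abstain
    # rules (the paper uses a fuzzy decision tree) and a second-price
    # sealed-bid auction at the saleyard. All numbers are invented.
    import random

    random.seed(42)
    farms = [{"stock": random.randint(50, 200), "last_price": 100.0}
             for _ in range(10)]

    def decide(farm):
        if farm["stock"] > 150:                       # overstocked -> sell
            return "sell"
        if farm["stock"] < 80 and farm["last_price"] < 110:
            return "buy"                              # understocked, prices low -> buy
        return "abstain"

    sellers = [f for f in farms if decide(f) == "sell"]
    buyers = [f for f in farms if decide(f) == "buy"]

    for lot in sellers:
        if len(buyers) < 2:
            break
        bids = sorted(((random.uniform(90, 130), b) for b in buyers),
                      key=lambda bid: bid[0])
        price, winner = bids[-2][0], bids[-1][1]      # winner pays second-highest bid
        winner["stock"] += 10; lot["stock"] -= 10
        winner["last_price"] = lot["last_price"] = round(price, 2)
        print(f"lot sold at {price:.2f}")
    ```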

  3. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  4. Spatial prior in SVM-based classification of brain images

    NASA Astrophysics Data System (ADS)

    Cuingnet, Rémi; Chupin, Marie; Benali, Habib; Colliot, Olivier

    2010-03-01

    This paper introduces a general framework for spatial priors in SVM-based classification of brain images based on Laplacian regularization. Most existing methods include a spatial prior by adding a feature aggregation step before the SVM classification. The problem with the aggregation step is that the individual information of each feature is lost. Our framework avoids this shortcoming by including the spatial prior directly in the SVM. We demonstrate that this framework can be used to derive embedded regularization corresponding to existing methods for classification of brain images and propose an efficient way to implement them. The framework is illustrated on the classification of MR images from 55 patients with Alzheimer's disease and 82 elderly controls selected from the ADNI database. The results demonstrate that the proposed algorithm makes it possible to introduce straightforward and anatomically consistent spatial priors into the classifier.
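
    One standard way to embed a Laplacian spatial prior directly in a linear SVM is sketched below under our own assumptions (this is not necessarily the authors' exact formulation): penalizing w'(I + λL)w instead of w'w is equivalent to training an ordinary SVM with the precomputed kernel K = X(I + λL)^(-1)X'. The 1-D "image" data are synthetic.

    ```python
    # Hedged sketch of a Laplacian spatial prior in a linear SVM via a
    # precomputed kernel; not necessarily the paper's exact formulation.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    n_voxels = 30
    # Chain-graph Laplacian over neighboring voxels of a 1-D "image".
    A = np.eye(n_voxels, k=1) + np.eye(n_voxels, k=-1)
    L = np.diag(A.sum(1)) - A

    X = rng.normal(size=(60, n_voxels))
    y = (X[:, 10:15].sum(1) > 0).astype(int)       # signal in a spatial cluster

    lam = 5.0
    M = np.linalg.inv(np.eye(n_voxels) + lam * L)  # smooths weights along the graph
    K = X @ M @ X.T                                # Laplacian-regularized linear kernel

    clf = SVC(kernel="precomputed").fit(K, y)
    print("train accuracy:", clf.score(K, y))
    ```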

  5. A Human Gait Classification Method Based on Radar Doppler Spectrograms

    NASA Astrophysics Data System (ADS)

    Tivive, Fok Hing Chi; Bouzerdoum, Abdesselam; Amin, Moeness G.

    2010-12-01

    An image classification technique, which has recently been introduced for visual pattern recognition, is successfully applied for human gait classification based on radar Doppler signatures depicted in the time-frequency domain. The proposed method has three processing stages. The first two stages are designed to extract Doppler features that can effectively characterize human motion based on the nature of arm swings, and the third stage performs classification. Three types of arm motion are considered: free-arm swings, one-arm confined swings, and no-arm swings. The last two arm motions can be indicative of a human carrying objects or a person in stressed situations. The paper discusses the different steps of the proposed method for extracting distinctive Doppler features and demonstrates their contributions to the final and desirable classification rates.

  6. Bazhenov Fm Classification Based on Wireline Logs

    NASA Astrophysics Data System (ADS)

    Simonov, D. A.; Baranov, V.; Bukhanov, N.

    2016-03-01

    This paper considers the main aspects of Bazhenov Formation interpretation and the application of machine learning algorithms to the Kolpashev type section of the Bazhenov Formation, using automatic classification algorithms that change the scale of research from small to large. Machine learning algorithms help interpret the Bazhenov Formation in a reference well and in other wells. During this study, unsupervised and supervised machine learning algorithms were applied to interpret lithology and reservoir properties. This greatly simplifies the routine problem of manual interpretation and has an economic effect on the cost of laboratory analysis.

  7. Wavelet-based asphalt concrete texture grading and classification

    NASA Astrophysics Data System (ADS)

    Almuntashri, Ali; Agaian, Sos

    2011-03-01

    In this paper, we introduce a new method for the evaluation, quality control, and automatic grading of texture images representing different textural classes of asphalt concrete (AC). We also present a new automatic classification and recognition system for asphalt concrete texture grading based on the wavelet transform, fractals, and Support Vector Machines (SVM). Experimental results were simulated using different cross-validation techniques and achieved an average classification accuracy of 91.4% on a set of 150 images belonging to five different texture grades.
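
    A hedged sketch of wavelet-based texture grading follows: subband energies from a 2-D wavelet decomposition (via PyWavelets) feed an SVM; the fractal features mentioned above are omitted, and the texture patches and grades are synthetic stand-ins for asphalt concrete images.

    ```python
    # Hedged sketch of wavelet-energy texture grading: subband energies from
    # a 2-D wavelet decomposition feed an SVM grader. Patches and grades are
    # invented stand-ins for asphalt-concrete surface images.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_energies(img, wavelet="db2", level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
        for detail in coeffs[1:]:                         # (cH, cV, cD) per level
            feats.extend(np.mean(band ** 2) for band in detail)
        return np.array(feats)

    rng = np.random.default_rng(7)
    fine = [rng.normal(0, 1.0, (64, 64)) for _ in range(15)]    # fine-grained texture
    coarse = [np.repeat(np.repeat(rng.normal(0, 1.0, (16, 16)), 4, 0), 4, 1)
              for _ in range(15)]                               # coarser texture

    X = np.array([wavelet_energies(p) for p in fine + coarse])
    y = np.array([0] * 15 + [1] * 15)                           # two texture grades
    print("train accuracy:", SVC(kernel="rbf").fit(X, y).score(X, y))
    ```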

  8. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At the previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In that method, the GA are employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation in the GA, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method was not automated because it requires not only the target image but also the number of categories for classification. In this paper, we describe some improvements toward implementing automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and can provide reasonable classification results.

  9. Ebolavirus classification based on natural vectors.

    PubMed

    Zheng, Hui; Yin, Changchuan; Hoang, Tung; He, Rong Lucy; Yang, Jie; Yau, Stephen S-T

    2015-06-01

    According to the WHO, ebolaviruses have resulted in 8818 human deaths in West Africa as of January 2015. To better understand the evolutionary relationships of the ebolaviruses and infer virulence from these relationships, we applied the alignment-free natural vector method to classify the newest ebolaviruses. The dataset includes three new Guinea viruses as well as 99 viruses from Sierra Leone. For the viruses of the family Filoviridae, both genus-label and species-label classification achieve an accuracy rate of 100%. We represented the relationships among Filoviridae viruses by Unweighted Pair Group Method with Arithmetic Mean (UPGMA) phylogenetic trees and found that the filoviruses separate well into three genera. We performed a phylogenetic analysis of the relationships among different species of Ebolavirus using their coding-complete genomes and seven viral protein genes (glycoprotein [GP], nucleoprotein [NP], VP24, VP30, VP35, VP40, and RNA polymerase [L]). The topology of the phylogenetic tree based on the viral protein VP24 is consistent with the variations in virulence of ebolaviruses. The result suggests that VP24 may be a pharmaceutical target for treating or preventing ebolavirus infections. PMID:25803489

  10. From Agents to Continuous Change via Aesthetics: Learning Mechanics with Visual Agent-Based Computational Modeling

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Farris, Amy Voss; Wright, Mason

    2012-01-01

    Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…

  11. Knowledge Management in Role Based Agents

    NASA Astrophysics Data System (ADS)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent systems literature, the role concept is increasingly researched to provide an abstraction that scopes the beliefs, norms, and goals of agents and shapes the relationships of the agents in the organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture is also implemented and incorporated into the SEAGENT multi-agent system development framework.

  12. Who's your neighbor? neighbor identification for agent-based modeling.

    SciTech Connect

    Macal, C. M.; Howe, T. R.; Decision and Information Sciences; Univ. of Chicago

    2006-01-01

    Agent-based modeling and simulation, based on the cellular automata paradigm, is an approach to modeling complex systems comprised of interacting autonomous agents. Open questions in agent-based simulation focus on scale-up issues encountered in simulating large numbers of agents. Specifically, how many agents can be included in a workable agent-based simulation? One of the basic tenets of agent-based modeling and simulation is that agents only interact and exchange locally available information with other agents located in their immediate proximity or neighborhood of the space in which the agents are situated. Generally, an agent's set of neighbors changes rapidly as a simulation proceeds through time and as the agents move through space. Depending on the topology defined for agent interactions, proximity may be defined by spatial distance for continuous space, adjacency for grid cells (as in cellular automata), or by connectivity in social networks. Identifying an agent's neighbors is a particularly time-consuming computational task and can dominate the computational effort in a simulation. Two challenges in agent simulation are (1) efficiently representing an agent's neighborhood and the neighbors in it and (2) efficiently identifying an agent's neighbors at any time in the simulation. These problems are addressed differently for different agent interaction topologies. While efficient approaches have been identified for agent neighborhood representation and neighbor identification for agents on a lattice with general neighborhood configurations, other techniques must be used when agents are able to move freely in space. Techniques for the analysis and representation of spatial data are applicable to the agent neighbor identification problem. This paper extends agent neighborhood simulation techniques from the lattice topology to continuous space, specifically R2. Algorithms based on hierarchical (quad trees) or non-hierarchical data structures (grid cells) are
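
    A minimal sketch of the grid-cell approach for continuous space: agents are hashed into square cells whose side equals the interaction radius, so a radius query only needs to inspect the 3x3 block of cells around the query point. Class and method names here are illustrative, not from the paper.

        from collections import defaultdict
        import math

        class GridNeighborIndex:
            # Bucket agents into square cells of side r; neighbors within
            # radius r can only lie in the 3x3 block of cells around a point.
            def __init__(self, radius):
                self.r = radius
                self.cells = defaultdict(list)

            def _cell(self, x, y):
                return (int(math.floor(x / self.r)), int(math.floor(y / self.r)))

            def insert(self, agent_id, x, y):
                self.cells[self._cell(x, y)].append((agent_id, x, y))

            def neighbors(self, x, y):
                cx, cy = self._cell(x, y)
                out = []
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for aid, ax, ay in self.cells.get((cx + dx, cy + dy), []):
                            if (ax - x) ** 2 + (ay - y) ** 2 <= self.r ** 2:
                                out.append(aid)
                return out

        index = GridNeighborIndex(radius=5.0)
        index.insert("a1", 1.0, 2.0)
        index.insert("a2", 4.0, 3.0)
        print(index.neighbors(0.0, 0.0))   # -> ['a1', 'a2']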

  13. An Immune Agent for Web-Based AI Course

    ERIC Educational Resources Information Center

    Gong, Tao; Cai, Zixing

    2006-01-01

    To overcome the weaknesses and faults of a web-based e-learning course such as Artificial Intelligence (AI), an immune agent was proposed, simulating the natural immune mechanism against a virus. The immune agent was built on the multi-dimension education agent model and an immune algorithm. The web-based AI course comprised many files, such as HTML…

  14. An Active Learning Exercise for Introducing Agent-Based Modeling

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Recent developments in agent-based modeling as a method of systems analysis and optimization indicate that students in business analytics need an introduction to the terminology, concepts, and framework of agent-based modeling. This article presents an active learning exercise for MBA students in business analytics that demonstrates agent-based…

  15. SVM based target classification using RCS feature vectors

    NASA Astrophysics Data System (ADS)

    Bufler, Travis D.; Narayanan, Ram M.; Dogaru, Traian

    2015-05-01

    This paper investigates the application of SVMs (Support Vector Machines) to the classification of stationary human targets and indoor clutter via spectral features. Applying Finite Difference Time Domain (FDTD) techniques allows us to examine the radar cross section (RCS) of humans and indoor clutter objects by utilizing different types of computer models. FDTD allows the spectral characteristics to be acquired over a wide range of frequencies, polarizations, aspect angles, and materials. The acquired target and clutter RCS spectral characteristics are then investigated in terms of their potential for target classification using SVMs. Based upon variables such as frequency and polarization, an SVM classifier can be trained to classify unknown targets as human or clutter. Furthermore, feature selection is applied to the spectral characteristics to determine the SVM classification accuracy on a reduced dataset. Classification accuracies of nearly 90% are achieved using radial and polynomial kernels.
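
    A minimal sketch of the classification step, assuming scikit-learn: RCS spectra (here random stand-ins for the FDTD-simulated human and clutter spectra) are scaled and fed to an RBF-kernel SVM, mirroring the radial kernel the abstract reports.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Stand-in data: rows are RCS spectra (dB) over frequency bins for one
        # polarization; labels 1 = human, 0 = clutter.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(-20, 3, (100, 64)),     # clutter-like spectra
                       rng.normal(-12, 3, (100, 64))])    # human-like spectra
        y = np.array([0] * 100 + [1] * 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))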

  16. Ensemble polarimetric SAR image classification based on contextual sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is imperfect in several respects. Ensemble learning is therefore introduced to address this issue: a plurality of different learners is trained and their outputs are combined to obtain more accurate and stable results. This paper presents a polarimetric SAR image classification method based on ensemble learning of sparse representation to achieve optimal classification.

  17. Classification of LiDAR Data with Point Based Classification Methods

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2016-06-01

    LiDAR is one of the most effective systems for 3-dimensional (3D) data collection over wide areas. Nowadays, airborne LiDAR data are used frequently in various applications such as object extraction, 3D modelling, change detection and map revision, with increasing point density and accuracy. The classification of LiDAR points is the first step of the LiDAR data-processing chain and should be handled properly, since applications such as 3D city modelling, building extraction and DEM generation directly use the classified point clouds. Different classification methods can be seen in recent research, most of which works with gridded LiDAR point clouds. In grid-based processing of LiDAR data, the loss of characteristic points, especially for vegetation and buildings, or the loss of height accuracy during the interpolation stage is inevitable. In this case, the possible solution is to use the raw point cloud for classification to avoid data and accuracy losses in the gridding process. In this study, the point-based classification possibilities of the LiDAR point cloud are investigated to obtain more accurate classes. Automatic point-based approaches based on hierarchical rules have been proposed to achieve ground, building and vegetation classes using the raw LiDAR point cloud. In the proposed approaches, every single LiDAR point is analyzed according to features such as height and multi-return, and then automatically assigned to the class to which it belongs. The use of the un-gridded point cloud in the proposed point-based classification process helped in determining more realistic rule sets. Detailed parameter analyses have been performed to obtain the most appropriate parameters in the rule sets to achieve accurate classes. Hierarchical rule sets were created for the proposed Approach 1 (using selected spatial-based and echo-based features) and Approach 2 (using only selected spatial-based features
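
    The hierarchical-rule idea can be sketched directly on raw points. The rules below (lowest returns per coarse cell are ground; elevated multi-return points are vegetation; elevated single-return points are buildings) are simplified assumptions in the spirit of the paper's height and multi-return features, not its actual rule sets.

        import numpy as np

        def classify_points(points, ground_tol=0.3):
            # Approximate local ground level per coarse XY cell from the lowest
            # return, then apply hierarchical rules on height and multi-return.
            cell = (np.floor(points["x"] / 10).astype(int) * 100003
                    + np.floor(points["y"] / 10).astype(int))
            ground_z = {c: points["z"][cell == c].min() for c in np.unique(cell)}
            height = points["z"] - np.array([ground_z[c] for c in cell])
            multi = points["num_returns"] > 1
            labels = np.empty(len(points), dtype="U10")
            labels[height <= ground_tol] = "ground"
            labels[(height > ground_tol) & multi] = "vegetation"
            labels[(height > ground_tol) & ~multi] = "building"
            return labels

        pts = np.array([(1.0, 1.0, 100.0, 1), (1.5, 1.2, 105.0, 1),
                        (2.0, 1.1, 103.0, 2)],
                       dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
                              ("num_returns", "i4")])
        print(classify_points(pts))   # ['ground' 'building' 'vegetation']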

  18. Pathological Bases for a Robust Application of Cancer Molecular Classification

    PubMed Central

    Diaz-Cano, Salvador J.

    2015-01-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed between cells through mechanisms of intercellular transfer of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next-generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411

  19. A clinically applicable molecular-based classification for endometrial cancers

    PubMed Central

    Talhouk, A; McConechy, M K; Leung, S; Li-Chang, H H; Kwon, J S; Melnyk, N; Yang, W; Senz, J; Boyd, N; Karnezis, A N; Huntsman, D G; Gilks, C B; McAlpine, J N

    2015-01-01

    Background: Classification of endometrial carcinomas (ECs) by morphologic features is inconsistent, and yields limited prognostic and predictive information. A new system for classification based on the molecular categories identified in The Cancer Genome Atlas is proposed. Methods: Genomic data from the Cancer Genome Atlas (TCGA) support classification of endometrial carcinomas into four prognostically significant subgroups; we used the TCGA data set to develop surrogate assays that could replicate the TCGA classification, but without the need for the labor-intensive and cost-prohibitive genomic methodology. Combinations of the most relevant assays were carried forward and tested on a new independent cohort of 152 endometrial carcinoma cases, and molecular vs clinical risk group stratification was compared. Results: Replication of TCGA survival curves was achieved with statistical significance using multiple different molecular classification models (16 total tested). Internal validation supported carrying forward a classifier based on the following components: mismatch repair protein immunohistochemistry, POLE mutational analysis and p53 immunohistochemistry as a surrogate for ‘copy-number' status. The proposed molecular classifier was associated with clinical outcomes, as was stage, grade, lymph-vascular space invasion, nodal involvement and adjuvant treatment. In multivariable analysis both molecular classification and clinical risk groups were associated with outcomes, but differed greatly in composition of cases within each category, with half of POLE and mismatch repair loss subgroups residing within the clinically defined ‘high-risk' group. Combining the molecular classifier with clinicopathologic features or risk groups provided the highest C-index for discrimination of outcome survival curves. Conclusions: Molecular classification of ECs can be achieved using clinically applicable methods on formalin-fixed paraffin-embedded samples, and provides

  20. A new circulation type classification based upon Lagrangian air trajectories

    NASA Astrophysics Data System (ADS)

    Ramos, Alexandre; Sprenger, Michael; Wernli, Heini; Durán-Quesada, Ana María; Lorenzo, Maria Nieves; Gimeno, Luis

    2014-10-01

    A new classification method for the large-scale circulation characteristic of a specific target area (the NW Iberian Peninsula) is presented, based on the analysis of 90-h backward trajectories arriving in this area, calculated with the 3-D Lagrangian particle dispersion model FLEXPART. A cluster analysis is applied to separate the backward trajectories into up to five representative air streams for each day. Specific measures are then used to characterise the distinct air streams (e.g., curvature of the trajectories, cyclonic or anticyclonic flow, moisture evolution, origin and length of the trajectories). The robustness of the presented method is demonstrated in comparison with the Eulerian Lamb weather type classification. A case study of the 2003 heatwave is discussed in terms of the new Lagrangian circulation and the Lamb weather type classifications. It is shown that the new classification method adds valuable information about the pertinent meteorological conditions, which is missing in an Eulerian approach. The new method is climatologically evaluated for the five-year period from December 1999 to November 2004. The ability of the method to capture the inter-seasonal circulation variability in the target region is shown. Furthermore, the multi-dimensional character of the classification is briefly discussed, in particular with respect to inter-seasonal differences. Finally, the relationship between the new Lagrangian classification and precipitation in the target area is studied.

  1. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier-identification method is proposed based on the density of super-pixel cluster centers. Image pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations, and a normalized density-and-distance discrimination rule is designed to select cluster centers automatically, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images faster than the density clustering algorithm, and effectively performs automated classification and outlier extraction.
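
    The density-and-distance machinery described here matches the density-peaks pattern: local density, distance to the nearest denser point, and a normalized product of the two for picking centers and spotting outliers. The sketch below implements that generic pattern on raw points; the paper's super-pixel preprocessing is omitted and the outlier threshold is an arbitrary choice.

        import numpy as np

        def density_peaks(X, dc, n_clusters):
            # Local density rho_i = number of points within cutoff dc;
            # delta_i = distance to the nearest point of higher density.
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            rho = (d < dc).sum(axis=1) - 1
            order = np.argsort(-rho)
            delta = np.zeros(len(X))
            for rank, i in enumerate(order):
                delta[i] = d[i, order[:rank]].min() if rank else d[i].max()
            gamma = (rho / rho.max()) * (delta / delta.max())  # normalized score
            centers = np.argsort(-gamma)[:n_clusters]          # cluster centers
            outliers = np.where((rho <= 1) & (delta > 3 * np.median(delta)))[0]
            labels = -np.ones(len(X), dtype=int)
            labels[centers] = np.arange(n_clusters)
            for rank, i in enumerate(order):                   # assign the rest
                if labels[i] < 0:
                    prev = order[:rank]
                    nearest = prev[np.argmin(d[i, prev])] if rank else centers[0]
                    labels[i] = labels[nearest]
            return labels, centers, outliers

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0)])
        labels, centers, outliers = density_peaks(X, dc=0.5, n_clusters=2)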

  2. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    Multispectral Lidar systems can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are post-processed with GNSS/IMU data, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.

  3. Multiclass microarray data classification based on confidence evaluation.

    PubMed

    Yu, H L; Gao, S; Qin, B; Zhao, J

    2012-01-01

    Microarray technology is becoming a powerful tool for clinical diagnosis, as it has the potential to discover gene expression patterns that are characteristic of a particular disease. To date, this possibility has received much attention in the context of cancer research, especially in tumor classification. However, most published articles have concentrated on the development of binary classification methods while neglecting the ubiquitous multiclass problems, and the few existing multiclass classification approaches have had poor predictive accuracy. In an effort to improve classification accuracy, we developed a novel multiclass microarray data classification method. First, we applied a "one versus rest-support vector machine" to classify the samples. Then the classification confidence of each testing sample was evaluated according to its distribution in feature space, and samples with poor confidence were extracted. Next, a novel strategy, which we named the "class priority estimation method based on centroid distance", was used to make category decisions for those poor-confidence samples. This approach was tested on seven benchmark multiclass microarray datasets, with encouraging results, demonstrating its effectiveness and feasibility. PMID:22653582
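
    A hedged sketch of the overall flow, assuming scikit-learn: a one-vs-rest SVM classifies all samples, the gap between the two highest decision values serves as the confidence, and low-confidence samples are re-assigned by nearest class centroid as a simple stand-in for the paper's class-priority estimation; the margin threshold is arbitrary.

        import numpy as np
        from sklearn.svm import LinearSVC

        def fit_predict(X_tr, y_tr, X_te, margin=0.2):
            clf = LinearSVC().fit(X_tr, y_tr)          # one-vs-rest by default
            scores = clf.decision_function(X_te)       # (n_samples, n_classes)
            top2 = np.sort(scores, axis=1)[:, -2:]
            confident = (top2[:, 1] - top2[:, 0]) >= margin
            pred = clf.classes_[scores.argmax(axis=1)]
            centroids = {c: X_tr[y_tr == c].mean(axis=0) for c in clf.classes_}
            for i in np.where(~confident)[0]:          # centroid-distance fallback
                pred[i] = min(centroids,
                              key=lambda c: np.linalg.norm(X_te[i] - centroids[c]))
            return pred

        rng = np.random.default_rng(3)
        means = np.zeros((3, 20))
        means[[0, 1, 2], [0, 1, 2]] = 3.0              # three shifted class means
        X = np.vstack([rng.normal(m, 1.0, (30, 20)) for m in means])
        y = np.repeat([0, 1, 2], 30)
        print(fit_predict(X[::2], y[::2], X[1::2]))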

  4. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard, yet important, problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification. PMID:23846511

  5. Multiple kernel learning for sparse representation-based classification.

    PubMed

    Shrivastava, Ashish; Patel, Vishal M; Chellappa, Rama

    2014-07-01

    In this paper, we propose a multiple kernel learning (MKL) algorithm that is based on the sparse representation-based classification (SRC) method. Taking advantage of nonlinear kernel SRC in efficiently representing the nonlinearities in the high-dimensional feature space, we propose an MKL method based on the kernel alignment criterion. Our method uses a two-step training procedure to learn the kernel weights and sparse codes. At each iteration, the sparse codes are updated first while fixing the kernel mixing coefficients, and then the kernel mixing coefficients are updated while fixing the sparse codes. These two steps are repeated until a stopping criterion is met. The effectiveness of the proposed method is demonstrated using several publicly available image classification databases, and it is shown that this method can perform significantly better than many competitive image classification algorithms. PMID:24835226
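
    One half of the two-step loop, the kernel-weight update, can be illustrated with the kernel-alignment criterion alone: weight each base kernel by its alignment with the ideal kernel yy^T. The sparse-coding half of the alternation is omitted in this sketch.

        import numpy as np
        from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

        def alignment(K, ideal):
            # Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F).
            return (K * ideal).sum() / (np.linalg.norm(K) * np.linalg.norm(ideal))

        rng = np.random.default_rng(4)
        X = rng.normal(size=(60, 10))
        y = np.where(X[:, 0] + 0.1 * rng.normal(size=60) > 0, 1.0, -1.0)
        ideal = np.outer(y, y)                        # ideal kernel for +/-1 labels

        kernels = [linear_kernel(X), polynomial_kernel(X, degree=2), rbf_kernel(X)]
        w = np.array([max(alignment(K, ideal), 0.0) for K in kernels])
        w /= w.sum()                                  # alignment-based mixing weights
        K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))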

  6. Validating agent based models through virtual worlds.

    SciTech Connect

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social setting where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means for accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists. This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  7. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires a continual update of the Mental Health Ontology. In this paper, we describe how ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four different types of agents: an Information agent, a Data Warehouse agent, Data Mining agents and an Ontology agent. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various mental health research experiments of a collaborative nature, which are greatly needed in times of increasing mental distress and illness.

  8. A Spitzer-based classification of TNOs

    NASA Astrophysics Data System (ADS)

    Cooper, J. R.; Dalle Ore, C. M.; Emery, J. P.

    2011-12-01

    The outer reaches of the Solar System are home to the icy bodies known as trans-Neptunian objects (TNOs). Challenges such as low albedo and small size have left this field relatively unexplored and, in turn, have encouraged the pursuit of these far-orbiting objects. A database of 48 objects was used by Fulchignoni et al. (2008) to cluster, model, and analyze the various spectra into classified taxa. The dataset adopted by Fulchignoni et al. (2008) was used as a baseline for visual colors, to which Dalle Ore et al. (in prep.) demonstrated the significance of adding albedo measurements taken from Stansberry et al. (2008). To further the classification accuracy, two near-infrared color bands from the Spitzer Space Telescope, centered at 3.55 and 4.50 microns, were added to the previous 7-filter photometry. The 9-band compilation produced results that differ from the previous studies; the addition of Spitzer data is expected to distinguish varying compositional properties of icy objects. We present a refined taxonomy that may uncover clues to evolutionary trends of the TNO population.

  9. Atmospheric circulation classification comparison based on wildfires in Portugal

    NASA Astrophysics Data System (ADS)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret the atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Radan Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, promoting atmospheric circulation classification into one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  10. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
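
    A rough sketch of the idea, assuming SciPy and scikit-learn: each band is spatially smoothed before the mutual information is computed, so neighboring pixels contribute to the similarity measure, and bands are then chosen greedily to minimize redundancy. The greedy criterion and bin counts are illustrative choices, not the paper's exact procedure.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.metrics import mutual_info_score

        def spatial_mi(band_a, band_b, bins=32, win=3):
            # MI between two bands after local mean filtering, so each pixel
            # carries information about its spatial neighborhood.
            a = uniform_filter(band_a.astype(float), size=win).ravel()
            b = uniform_filter(band_b.astype(float), size=win).ravel()
            a = np.digitize(a, np.histogram_bin_edges(a, bins)[1:-1])
            b = np.digitize(b, np.histogram_bin_edges(b, bins)[1:-1])
            return mutual_info_score(a, b)

        def select_bands(cube, n_select):
            # Greedy: seed with the highest-variance band, then repeatedly add
            # the band least redundant (lowest max spatial MI) with those chosen.
            chosen = [int(np.argmax([b.var() for b in cube]))]
            while len(chosen) < n_select:
                rest = [i for i in range(len(cube)) if i not in chosen]
                redun = [max(spatial_mi(cube[i], cube[j]) for j in chosen)
                         for i in rest]
                chosen.append(rest[int(np.argmin(redun))])
            return chosen

        cube = np.random.default_rng(5).random((8, 64, 64))   # (bands, H, W)
        print(select_bands(cube, 3))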

  11. Space Situational Awareness using Market Based Agents

    NASA Astrophysics Data System (ADS)

    Sullivan, C.; Pier, E.; Gregory, S.; Bush, M.

    2012-09-01

    Space surveillance for the DoD is not limited to the Space Surveillance Network (SSN). Other DoD-owned assets have some existing capabilities for tasking but have no systematic way to work collaboratively with the SSN. These are run by diverse organizations including the Services, other defense and intelligence agencies and national laboratories. Beyond these organizations, academic and commercial entities have systems that possess SSA capability. Almost all of these assets have some level of connectivity, security, and potential autonomy. Exploiting them in a mutually beneficial structure could provide a more comprehensive, efficient and cost-effective solution for SSA. The collection of all potential assets, providers and consumers of SSA data comprises a market which is functionally illiquid. The development of a dynamic marketplace for SSA data could give would-be providers the opportunity to sell data to SSA consumers for monetary or incentive-based compensation. A well-conceived market architecture could drive down SSA data costs through increased supply and improve efficiency through increased competition. Oceanit will investigate market and market-agent architectures, protocols, standards, and incentives toward producing high-volume/low-cost SSA.

  12. Directional wavelet based features for colonic polyp classification.

    PubMed

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet-based methods such as the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied to the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database was acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet-based methods for the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already often and successfully been applied to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information from the subbands of the wavelet-based methods. Most of the 25 approaches in total have already been published in different texture classification contexts; thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel: they extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that we can compare their results with those of the wavelet-based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the shearlet transform. These three wavelet-based transforms in combination with Weibull features even outperform the state
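
    The Weibull feature extraction is easy to sketch with the plain 2-D DWT standing in for the dual-tree complex wavelet, Gabor and shearlet transforms used in the paper (assuming PyWavelets and SciPy): fit a Weibull distribution to the absolute coefficients of each detail subband and keep the shape and scale parameters.

        import numpy as np
        import pywt
        from scipy.stats import weibull_min

        def weibull_wavelet_features(img, wavelet="db4", levels=3):
            # Decompose with a 2-D DWT and fit a Weibull distribution to the
            # absolute coefficients of each detail subband.
            coeffs = pywt.wavedec2(img, wavelet, level=levels)
            feats = []
            for detail in coeffs[1:]:                 # skip approximation subband
                for sub in detail:                    # horizontal, vertical, diagonal
                    data = np.abs(sub).ravel() + 1e-8
                    shape, loc, scale = weibull_min.fit(data, floc=0)
                    feats += [shape, scale]
            return np.array(feats)                    # 2 * 3 * levels values

        img = np.random.default_rng(10).random((128, 128))
        print(weibull_wavelet_features(img).shape)    # (18,)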

  13. Efficient Classification-Based Relabeling in Mixture Models

    PubMed Central

    Cron, Andrew J.; West, Mike

    2011-01-01

    Effective component relabeling in Bayesian analyses of mixture models is critical to the routine use of mixtures in classification with analysis based on Markov chain Monte Carlo methods. The classification-based relabeling approach here is computationally attractive and statistically effective, and scales well with sample size and number of mixture components concordant with enabling routine analyses of increasingly large data sets. Building on the best of existing methods, practical relabeling aims to match data:component classification indicators in MCMC iterates with those of a defined reference mixture distribution. The method performs as well as or better than existing methods in small dimensional problems, while being practically superior in problems with larger data sets as the approach is scalable. We describe examples and computational benchmarks, and provide supporting code with efficient computational implementation of the algorithm that will be of use to others in practical applications of mixture models. PMID:21660126
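
    The matching step at the core of such relabeling can be written as an assignment problem: permute the iterate's component labels to maximize agreement with the reference labeling, solved here with the Hungarian algorithm from SciPy. This is a generic sketch of label matching, not the authors' exact implementation.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def relabel(z_iter, z_ref, k):
            # cost[i, j] = -(number of samples labeled i in the iterate
            # and j in the reference); minimizing cost maximizes agreement.
            cost = -np.array([[np.sum((z_iter == i) & (z_ref == j))
                               for j in range(k)] for i in range(k)])
            rows, cols = linear_sum_assignment(cost)
            perm = dict(zip(rows, cols))
            return np.array([perm[z] for z in z_iter])

        z_ref = np.array([0, 0, 1, 1, 2, 2])
        z_iter = np.array([2, 2, 0, 0, 1, 1])     # same partition, permuted labels
        print(relabel(z_iter, z_ref, k=3))        # -> [0 0 1 1 2 2]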

  14. Robust materials classification based on multispectral polarimetric BRDF imagery

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Zhao, Yong-qiang; Luo, Li; Liu, Dan; Pan, Quan

    2009-07-01

    When light is reflected from an object's surface, its spectral characteristics are affected by the surface's elemental composition, while its polarimetric characteristics are determined by the surface's orientation, roughness and conductance. Multispectral polarimetric imaging records both the spectral and polarimetric characteristics of the light, adds dimensions to the spatial intensity typically acquired, and can provide unique and discriminatory information that may augment material classification techniques. But because object surfaces are not Lambertian, the spectral and polarimetric characteristics change with the illumination and observation angles; if the BRDF is ignored during material classification, misclassification is inevitable. To obtain features that are robust for material classification on non-Lambertian surfaces, a new classification method based on multispectral polarimetric BRDF characteristics is proposed in this paper. The Support Vector Machine method is adopted to classify targets in cluttered grass environments. The training sets were obtained under sunny conditions, while the test sets were acquired under three different weather and detection conditions; the classification results based on multispectral polarimetric BRDF features are then compared with two other results, based on spectral information and on multispectral polarimetric information, under sunny, cloudy and dark conditions respectively. The experimental results show that the method based on multispectral polarimetric BRDF features performs the most robustly, and its classification precision also surpasses the other two. When imaging objects in dark weather it is difficult to distinguish different materials using spectral features alone, as the gray levels of backgrounds and targets at each wavelength are very close, but the method proposed in this paper can efficiently solve this problem.

  15. NIM: A Node Influence Based Method for Cancer Classification

    PubMed Central

    Wang, Yiwen; Yang, Jianhua

    2014-01-01

    The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The datasets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART. PMID:25180045
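
    A hedged sketch of the four-part pipeline: an RBF kernel provides the similarity matrix, a training node's influence is approximated by its within-class degree (a stand-in for the paper's node influence model, which this abstract does not define), and class scores are influence-weighted similarity sums.

        import numpy as np

        def nim_classify(X_train, y_train, X_test, gamma=1.0):
            def rbf(A, B):
                # RBF similarity matrix between two sample sets.
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * d2)
            S_train = rbf(X_train, X_train)
            classes = np.unique(y_train)
            influence = np.zeros(len(X_train))
            for c in classes:
                idx = np.where(y_train == c)[0]
                influence[idx] = S_train[np.ix_(idx, idx)].sum(axis=1)
            S_test = rbf(X_test, X_train)
            scores = np.stack([(S_test[:, y_train == c]
                                * influence[y_train == c]).sum(axis=1)
                               for c in classes])
            return classes[scores.argmax(axis=0)]

        rng = np.random.default_rng(6)
        X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
        y = np.repeat([0, 1], 20)
        print(nim_classify(X, y, rng.normal(3, 1, (3, 5))))   # -> [1 1 1]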

  16. Agent Persuasion Mechanism of Acquaintance

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Agent persuasion can improve negotiation efficiency in dynamic environments thanks to properties such as initiative and autonomy, and it is strongly affected by acquaintance. A classification of acquaintance in agent persuasion is illustrated, along with the acquaintance-based agent persuasion model. The concept of the agent persuasion degree of acquaintance is then given. Finally, the relevant interaction mechanism is elaborated.

  17. Impact of Information based Classification on Network Epidemics.

    PubMed

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348
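
    The threshold behavior referenced here can be illustrated with the classic SIR model, which is far simpler than the paper's 1-n-n-1 DifEpGoss architecture but shows the same qualitative mechanism: the epidemic grows when the reproduction number exceeds one and dies out otherwise.

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            # Classic SIR compartments: susceptible, infected, recovered fractions.
            S, I, R = y
            return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

        t = np.linspace(0.0, 200.0, 2000)
        gamma = 0.2
        for beta in (0.4, 0.1):                      # R0 = 2.0 vs R0 = 0.5
            S, I, R = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma)).T
            print(f"R0={beta / gamma:.1f}: peak infected fraction {I.max():.3f}")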

  18. Impact of Information based Classification on Network Epidemics

    PubMed Central

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-01-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348

  19. Impact of Information based Classification on Network Epidemics

    NASA Astrophysics Data System (ADS)

    Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini

    2016-06-01

    Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom based classification. This is the first such attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work three real network datasets with 22002, 22469 and 22607 undirected edges respectively, are used. The datasets show that classification based prevention given in the model can have a good role in containing network epidemics. Further simulation based experiments are used with a three category classification of attack and defense strengths, which allows us to consider 27 different possibilities. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results.

  20. Rule-based Cervical Spine Defect Classification Using Medical Narratives.

    PubMed

    Deng, Yihan; Groll, Mathias Jacob; Denecke, Kerstin

    2015-01-01

    Classifying the defects occurring at the cervical spine provides the basis for surgical treatment planning and therapy recommendation. This process requires evidence from patient records. Further, the degree of a defect needs to be encoded in a standardized form to facilitate data exchange and multimodal interoperability. In this paper, a concept for automatic defect classification based on information extracted from textual data of patient records is presented. In a retrospective study, the classifier is applied to clinical documents and the classification results are evaluated. PMID:26262337

  1. Classification of CT-brain slices based on local histograms

    NASA Astrophysics Data System (ADS)

    Avrunin, Oleg G.; Tymkovych, Maksym Y.; Pavlov, Sergii V.; Timchik, Sergii V.; Kisała, Piotr; Orakbaev, Yerbol

    2015-12-01

    Neurosurgical intervention is a very complicated process. Modern operating procedures are based on data such as CT and MRI. Automated analysis of these data is an important task for researchers. Some modern methods of brain-slice segmentation use additional information to process the images, and classification can be used to obtain this information. To classify CT images of the brain, we suggest using local histograms and features extracted from them. The paper shows the process of feature extraction and classification for CT slices of the brain; the feature extraction process is specialized for axial cross-sections. The work can be applied in medical neurosurgical systems.
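
    A minimal sketch of the local-histogram features: the axial slice is divided into a grid of blocks and each block's normalized intensity histogram is concatenated into one feature vector, which any standard classifier could then consume. Grid and bin sizes are illustrative choices.

        import numpy as np

        def local_histogram_features(ct_slice, grid=(4, 4), bins=16):
            # Split the slice into grid blocks and concatenate each block's
            # normalized intensity histogram into one feature vector.
            h, w = ct_slice.shape
            bh, bw = h // grid[0], w // grid[1]
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    block = ct_slice[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
                    feats.append(hist / hist.sum())
            return np.concatenate(feats)              # grid[0]*grid[1]*bins values

        slice_ = np.random.default_rng(7).random((128, 128))  # normalized toy slice
        print(local_histogram_features(slice_).shape)         # (256,)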

  2. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the type of crop contained in a region for agricultural planning. There are two types of classification: pixel-based and object-based. While pixel-based classification methods are based on the information in each pixel, object-based classification is based on objects, or image objects, formed by combining information from a set of similar pixels. A multispectral image has a higher spectral resolution than a panchromatic image, while a panchromatic image has a higher spatial resolution than a multispectral image. Pan sharpening is the process of merging high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution color image. The aim of the study was to compare the classification accuracy provided by the pan-sharpened image. In this study, a SPOT 5 image dated April 2013 was used; the 5 m panchromatic and 10 m multispectral images were pan-sharpened. Four different classification methods were investigated: maximum likelihood, decision tree and support vector machine at the pixel level, and object-based classification. The SPOT 5 pan-sharpened image was used to classify sunflower and corn in a study site located in the Kadirli region of Osmaniye, Turkey, and the effects of the pan-sharpened image on the classification results were examined. Accuracy assessment showed that object-based classification resulted in better overall accuracy values than the others. The results indicate that these classification methods can be used for identifying sunflower and corn and for estimating crop areas.
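
    The abstract does not state which fusion algorithm was used, so the sketch below uses the Brovey transform, one common pan-sharpening method: the resampled multispectral bands are rescaled so that their sum matches the high-resolution panchromatic intensity.

        import numpy as np

        def brovey_pansharpen(ms, pan):
            # Brovey transform: scale each upsampled multispectral band so the
            # band sum matches the panchromatic intensity at full resolution.
            # ms: (bands, H, W) already resampled to the pan grid; pan: (H, W).
            intensity = ms.sum(axis=0)
            intensity[intensity == 0] = 1e-6          # avoid division by zero
            return ms * (pan / intensity)[None, :, :]

        rng = np.random.default_rng(8)
        ms = rng.random((4, 100, 100))                # e.g. 10 m bands resampled to 5 m
        pan = rng.random((100, 100))                  # 5 m panchromatic
        print(brovey_pansharpen(ms, pan).shape)       # (4, 100, 100)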

  3. Exploring cooperation and competition using agent-based modeling

    PubMed Central

    Elliott, Euel; Kiel, L. Douglas

    2002-01-01

    Agent-based modeling enhances our capacity to model competitive and cooperative behaviors at both the individual and group levels of analysis. Models presented in these proceedings produce consistent results regarding the relative fragility of cooperative regimes among agents operating under diverse rules. These studies also show how competition and cooperation may generate change at both the group and societal level. Agent-based simulation of competitive and cooperative behaviors may reveal the greatest payoff to social science research of all agent-based modeling efforts because of the need to better understand the dynamics of these behaviors in an increasingly interconnected world. PMID:12011396

  4. Volatility clustering in agent based market models

    NASA Astrophysics Data System (ADS)

    Giardina, Irene; Bouchaud, Jean-Philippe

    2003-06-01

    We define and study a market model where agents have different strategies among which they can choose according to their relative profitability, with the possibility of not participating in the market. The price is updated according to the excess demand, and the wealth of the agents is properly accounted for. Only two parameters play a significant role: one describes the impact of trading on the price, and the other describes the propensity of agents to be trend-following or contrarian. We observe three different regimes, depending on the value of these two parameters: an oscillating phase with bubbles and crashes, an intermittent phase and a stable ‘rational’ market phase. The statistics of price changes in the intermittent phase resemble those of real price changes, with small linear correlations, fat tails and long-range volatility clustering. We discuss how the time dependence of these two parameters spontaneously drives the system into the intermittent region.
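
    A drastically simplified sketch of such a model: agents buy, sell or abstain with a trend-dependent bias, and the price moves in proportion to the excess demand. The wealth accounting and explicit strategy scoring of the paper are omitted; the two parameters kept are the price-impact and trend-following coefficients.

        import numpy as np

        rng = np.random.default_rng(9)
        N, T = 500, 2000
        lam = 0.05      # price impact of excess demand
        alpha = 0.6     # trend-following (>0) vs contrarian (<0) propensity
        price = np.zeros(T)
        for t in range(1, T):
            trend = price[t - 1] - price[t - 2] if t > 1 else 0.0
            p_buy = np.clip(0.5 + alpha * trend, 0.05, 0.95)
            active = rng.random(N) < 0.8              # agents may stay out
            orders = np.where(rng.random(N) < p_buy, 1, -1) * active
            price[t] = price[t - 1] + lam * orders.sum() / N
        r = np.diff(price)
        print("excess kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3)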

  5. Reordering based integrative expression profiling for microarray classification

    PubMed Central

    2012-01-01

    Background: Current network-based microarray analysis uses information about interactions among the genes/gene products concerned, but still considers each gene's expression individually. We propose an organized knowledge-supervised approach, Integrative eXpression Profiling (IXP), to improve microarray classification accuracy and to help discover groups of genes that have been too weak to detect individually by traditional means. To implement IXP, an ant colony optimization reordering (ACOR) algorithm is used to group functionally related genes in an ordered way. Results: Using Alzheimer's disease (AD) as an example, we demonstrate how to apply the ACOR-based IXP approach to microarray classification. Using a microarray dataset, GSE1297 with 31 samples, as the training set, the result of blinded classification on another microarray dataset, GSE5281 with 151 samples, shows that our approach can improve accuracy from 74.83% to 82.78%. A recently published 1372-probe signature for AD achieves only 61.59% accuracy under the same conditions. The ACOR-based IXP approach also outperforms IXP approaches based on classic network ranking, graph clustering, and random-ordering methods in an overall classification performance comparison. Conclusions: The ACOR-based IXP approach can serve as a knowledge-supervised feature transformation approach that increases classification accuracy dramatically, by transforming each gene expression profile into an integrated expression profile whose features are input into standard classifiers. The IXP approach integrates both gene expression information and organized knowledge, namely disease gene/protein network topology, which is represented as both network node weights (local topological properties) and network node orders (global topological characteristics). PMID:22536860

  6. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
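
    A toy sketch of the rule-based core: rules map feature conditions to class conclusions with certainty factors, combined MYCIN-style (cf = a + b(1 - a) for positive evidence). The feature names and thresholds below are hypothetical, not the CLIPS rules of the prototype.

        def combine_cf(a, b):
            # MYCIN-style combination of two positive certainty factors.
            return a + b * (1.0 - a)

        RULES = [
            # (condition over extracted features, class label, certainty factor)
            (lambda f: f["motion"] > 0.7 and f["green_ratio"] > 0.4, "football", 0.8),
            (lambda f: f["text_density"] > 0.5 and f["motion"] < 0.2, "news", 0.7),
            (lambda f: f["cut_rate"] > 0.6, "commercial", 0.6),
        ]

        def classify(features):
            scores = {}
            for condition, label, cf in RULES:
                if condition(features):
                    scores[label] = combine_cf(scores.get(label, 0.0), cf)
            return max(scores, key=scores.get) if scores else "unknown"

        print(classify({"motion": 0.8, "green_ratio": 0.5,
                        "text_density": 0.1, "cut_rate": 0.2}))   # -> football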

  7. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.

  8. Risk-based Classification of Incidents

    NASA Technical Reports Server (NTRS)

    Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.

    2003-01-01

    As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.

  9. An Extension Dynamic Model Based on BDI Agent

    NASA Astrophysics Data System (ADS)

    Yu, Wang; Feng, Zhu; Hua, Geng; WangJing, Zhu

    This paper's research is based on the BDI Agent model. It first analyzes the deficiencies of the traditional BDI Agent model, then proposes an extended dynamic BDI Agent model based on the traditional one. The new model can quickly achieve the internal interactions of the traditional BDI Agent model, deal with complex issues in dynamic and open environments, and react quickly. The model is shown to be natural and reasonable by verifying the origin-of-civilization scenario, using a model of monkeys learning to eat sweet potatoes built on the extended dynamic model. Comparing the extended dynamic BDI Agent model with the traditional BDI Agent model using SWARM verifies its feasibility; the model has important theoretical significance.

  10. Competency Based Curriculum for Real Estate Agent.

    ERIC Educational Resources Information Center

    McCloy, Robert J.

    This publication is a curriculum and teaching guide for preparing real estate agents in the state of West Virginia. The guide contains 30 units, or lessons. Each lesson is designed to cover three to five hours of instruction time. Competencies provided for each lesson are stated in terms of what the student should be able to do as a result of the…

  11. AGENT-BASED MODELING OF INDUSTRIAL ECOSYSTEMS

    EPA Science Inventory

    The objectives of this research are to investigate behavioral and organizational questions associated with environmental regulation of firms, and to test specifically whether a bottom-up approach that highlights principal-agent problems offers new insights and empirical validi...

  12. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  13. An Agent-Based Cockpit Task Management System

    NASA Technical Reports Server (NTRS)

    Funk, Ken

    1997-01-01

    An agent-based program to facilitate Cockpit Task Management (CTM) in commercial transport aircraft is developed and evaluated. The agent-based program called the AgendaManager (AMgr) is described and evaluated in a part-task simulator study using airline pilots.

  14. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and the correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual-mining-based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools in integration can provide an efficient approach for obtaining information about improvements in classification accuracy and help refine the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  15. Segmentation Based Fuzzy Classification of High Resolution Images

    NASA Astrophysics Data System (ADS)

    Rao, Mukund; Rao, Suryaprakash; Masser, Ian; Kasturirangan, K.

    Information extraction from satellite images is the process of delineating entities in the image that pertain to features on the earth; by associating attributes with these entities, a classification of the image is obtained. Classification is a common technique for extracting information from remote sensing data and, by and large, the common classification techniques mainly exploit the spectral characteristics of remote sensing images and attempt to detect patterns in spectral information to classify images. These are based on a per-pixel analysis of the spectral information, in which "clustering" or "grouping" of pixels is done to generate meaningful thematic information. Most classification techniques apply statistical pattern recognition to image spectral vectors to "label" each pixel with appropriate class information from a set of training information. Segmentation, on the other hand, is not new, but it is as yet seldom used in image processing of remotely sensed data. Although there has been a lot of development in segmentation of grey tone images in this field and in others, like robotic vision, there has been little progress in segmentation of colour or multi-band imagery. Especially within the last two years many new segmentation algorithms as well as applications have been developed, but not all of them lead to qualitatively convincing results while being robust and operational. One reason is that the segmentation of an image into a given number of regions is a problem with a huge number of possible solutions. Newer algorithms based on a fractal approach could eventually revolutionize image processing of remotely sensed data. The paper looks at applying spatial concepts to image processing, paving the way to algorithmically formulate some more advanced aspects of cognition and inference. In GIS-based spatial analysis, vector-based tools have already been able to support advanced tasks generating new knowledge. By identifying objects (as segmentation results) from

  16. Detection/classification/quantification of chemical agents using an array of surface acoustic wave (SAW) devices

    NASA Astrophysics Data System (ADS)

    Milner, G. Martin

    2005-05-01

    ChemSentry is a portable system used to detect, identify, and quantify chemical warfare (CW) agents. Electrochemical (EC) cell sensor technology is used for blood agents, and an array of surface acoustic wave (SAW) sensors is used for nerve and blister agents. The combination of the EC cell and the SAW array provides sufficient sensor information to detect, classify, and quantify all CW agents of concern using smaller, lighter, lower cost units. Initial development of the SAW array and processing was a key challenge for ChemSentry, requiring several years of fundamental testing of polymers and coating methods to finalize the sensor array design in 2001. Following the finalization of the SAW array, nearly three (3) years of intensive testing in both laboratory and field environments were required to gather sufficient data to fully understand the response characteristics. Virtually unbounded permutations of agent and environmental characteristics must be considered in order to operate against all agents and all environments of interest to the U.S. military and other potential users of ChemSentry. The resulting signal processing design, matched to this extensive body of measured data (over 8,000 agent challenges and 10,000 hours of ambient data), is considered to be a significant advance in the state of the art for CW agent detection.

  17. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely understudied, and represents an area where further research is needed to…

  18. Classification of Regional Ionospheric Disturbances Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil

    2016-07-01

    Ionosphere is an anisotropic, inhomogeneous, time varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only by indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive scales in period, duration, amplitude and frequency of disturbances. Since geomagnetic and solar indices such as Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and W-index provide information about variability on a global scale, identification and classification of regional disturbances poses a challenge. The main aim of this study is to identify the regional effects of global geomagnetic storms and classify them according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, one of the major parameters of the ionosphere, is used together with solar and geomagnetic indices to model the regional and local variability that differs from global activity. In this work, for the automated classification of regional disturbances, a classification technique based on Support Vector Machines (SVM), a robust machine learning technique that has found widespread use, is proposed. SVM is a supervised learning model used for classification, with an associated learning algorithm that analyzes the data and recognizes patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding data into higher dimensional feature spaces. Performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. As a result of implementing the developed classification
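
    A hedged sketch of the kind of SVM pipeline described, with an RBF kernel providing the nonlinear embedding into a higher dimensional feature space; the synthetic features merely stand in for TEC-derived disturbance indicators.

    ```python
    # RBF-kernel SVM for nonlinear classification (illustrative stand-in for
    # the TEC-based disturbance classifier; features and labels synthetic).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 4))                              # placeholder features
    y = (np.linalg.norm(X[:, :2], axis=1) > 1.2).astype(int)   # nonlinear rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_tr, y_tr)
    print("test accuracy:", round(model.score(X_te, y_te), 3))
    ```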

  19. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. This classification is based on all the cyclones which occurred in a four-year period (between 1996 and 1999). Four cyclones, selected among all those which developed during this time frame, are analyzed. In particular, the classification allows discrimination between two cyclones (which occurred in October 1996 and in March 1999) that both display a very well

  20. Bayesian outcome-based strategy classification.

    PubMed

    Lee, Michael D

    2016-03-01

    Hilbig and Moshagen (Psychonomic Bulletin & Review, 21, 1431-1443, 2014) recently developed a method for making inferences about the decision processes people use in multi-attribute forced choice tasks. Their paper makes a number of worthwhile theoretical and methodological contributions. Theoretically, they provide an insightful psychological motivation for a probabilistic extension of the widely used "weighted additive" (WADD) model, and show how this model, as well as other important models like "take-the-best" (TTB), can and should be expressed in terms of meaningful priors. Methodologically, they develop an inference approach based on the Minimum Description Length (MDL) principle that balances both the goodness-of-fit and the complexity of the decision models they consider. This paper aims to preserve these useful contributions, but provide a complementary Bayesian approach with some theoretical and methodological advantages. We develop a simple graphical model, implemented in JAGS, that allows for fully Bayesian inferences about which models people use to make decisions. To demonstrate the Bayesian approach, we apply it to the models and data considered by Hilbig and Moshagen (2014), showing how a prior predictive analysis of the models, and posterior inferences about which models people use and the parameter settings at which they use them, can contribute to our understanding of human decision making. PMID:25697091
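
    The paper implements a full graphical model in JAGS; as a deliberately simplified, hypothetical illustration of the underlying idea, the snippet below computes a posterior over two candidate decision strategies that assign different probabilities to each observed choice, under a uniform prior. All numbers are invented.

    ```python
    # Toy Bayesian strategy classification: posterior over two candidate
    # strategies given a sequence of binary choices (illustrative only).
    import numpy as np

    # Per-trial probability that each strategy predicts "choose A".
    p_strategy = {
        "TTB-like":  np.array([0.9, 0.9, 0.1, 0.9, 0.1, 0.9]),
        "WADD-like": np.array([0.7, 0.4, 0.3, 0.8, 0.6, 0.5]),
    }
    choices = np.array([1, 1, 0, 1, 0, 1])      # observed: 1 = chose A

    log_post = {name: float(np.sum(choices * np.log(p) +
                                   (1 - choices) * np.log(1 - p)))
                for name, p in p_strategy.items()}   # uniform prior drops out

    z = np.logaddexp(*log_post.values())             # normalizing constant
    for name, lp in log_post.items():
        print(f"P({name} | choices) = {np.exp(lp - z):.3f}")
    ```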

  1. Similarity-Based Classification in Partially Labeled Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Lü, Linyuan

    Two main difficulties in the problem of classification in partially labeled networks are the sparsity of the known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, where the basic assumption is that two nodes are more likely to be categorized into the same class if they are more similar. In this paper, we introduce ten similarity indices defined based on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and give more accurate classification than the relational neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information of known labeled nodes is sufficient, the indices considering only local information can perform as well as the global indices while having much lower computational complexity.
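
    A minimal sketch of the basic assumption, using the common-neighbours index (one local structural similarity of the kind the paper considers): each unlabeled node takes the class whose labeled nodes are most similar to it. The graph and labels are toy data, not the political-books network.

    ```python
    # Similarity-based label prediction in a partially labeled toy network.
    from collections import defaultdict

    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (3, 4), (4, 5), (3, 5), (2, 3)]
    labels = {0: "A", 1: "A", 4: "B", 5: "B"}   # nodes 2 and 3 are unlabeled

    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def common_neighbours(u, v):
        return len(adj[u] & adj[v])             # local similarity index

    for node in (2, 3):
        # Each labeled node votes for its class, weighted by similarity.
        score = defaultdict(float)
        for other, lab in labels.items():
            score[lab] += common_neighbours(node, other)
        prediction = max(score, key=score.get)
        print(f"node {node}: scores={dict(score)} -> predicted {prediction}")
    ```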

  2. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour particularly influences the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions, unless natural disturbances such as landslides and forest fires, or artificial disturbances such as deforestation and man-made plantation, bring about changes in those conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective for producing a vegetation map than pixel-based classification.

  3. Networks based on collisions among mobile agents

    NASA Astrophysics Data System (ADS)

    González, Marta C.; Lind, Pedro G.; Herrmann, Hans J.

    2006-12-01

    We investigate in detail a recent model of colliding mobile agents [M.C. González, P.G. Lind, H.J. Herrmann, Phys. Rev. Lett. 96 (2006) 088702. cond-mat/0602091], used as an alternative approach for constructing evolving networks of interactions formed by collisions governed by suitable dynamical rules. The system of mobile agents evolves towards a quasi-stationary state which is, apart from small fluctuations, well characterized by the density of the system and the residence time of the agents. The residence time defines a collision rate, and by varying this collision rate, the system percolates at a critical value, with the emergence of a giant cluster whose critical exponents are the ones of two-dimensional percolation. Further, the degree and clustering coefficient distributions, and the average path length, show that the network associated with such a system presents non-trivial features which, depending on the collision rules, enables one not only to recover the main properties of standard networks, such as exponential, random and scale-free networks, but also to obtain other topological structures. To illustrate, we show a specific example where the obtained structure has topological features which characterize the structure and evolution of social networks accurately in different contexts, ranging from networks of acquaintances to networks of sexual contacts.

  4. Character-based DNA barcoding: a superior tool for species classification.

    PubMed

    Bergmann, Tjard; Hadrys, Heike; Breves, Gerhard; Schierwater, Bernd

    2009-01-01

    In zoonosis research, only correctly assigned host-agent-vector associations can lead to success. If most biological species on Earth, from agent to host and from procaryotes to vertebrates, are still undetected, the development of a reliable and universal diversity detection tool becomes a conditio sine qua non. In this context, at breathtaking speed, modern molecular-genetic techniques have become acknowledged tools for the classification of life forms at all taxonomic levels. While previous DNA-barcoding techniques were criticised for several reasons (Moritz and Cicero, 2004; Rubinoff et al., 2006a, b; Rubinoff, 2006; Rubinoff and Haines, 2006), a new approach, the so-called CAOS-barcoding (Character Attribute Organisation System), avoids most of the weak points. Traditional DNA-barcoding approaches are based on distances, i.e. they use genetic distances and tree construction algorithms for the classification of species or lineages. The definition of limit values is enforced, which prohibits a discrete or clear assignment. In comparison, the new character-based barcoding (CAOS-barcoding; DeSalle et al., 2005; DeSalle, 2006; Rach et al., 2008) works with discrete single characters and character combinations, which permits a clear, unambiguous classification. In Hannover (Germany) we are optimising this system and developing a semiautomatic high-throughput procedure for hosts, agents and vectors being studied within the Zoonosis Centre of the "Stiftung Tierärztliche Hochschule Hannover". Our primary research is concentrated on insects, the most successful and species-rich animal group on Earth (every fourth animal is a bug). One subgroup, the winged insects (Pterygota), represents the outstanding majority of all zoonosis-relevant animal vectors. PMID:19999380
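
    A much-simplified, hypothetical stand-in for the character-based idea: scan an alignment for positions whose state is fixed within a group and absent from all other groups, analogous to the diagnostic characters CAOS extracts. The sequences below are made up.

    ```python
    # Find simple diagnostic characters: positions fixed within a group and
    # absent from every other group (toy alignment, illustrative only).
    alignment = {
        "speciesA": ["ACGTAC", "ACGTAT"],
        "speciesB": ["TCGAAC", "TCGAAT"],
    }

    length = len(next(iter(alignment.values()))[0])
    for group, seqs in alignment.items():
        others = [s for g, ss in alignment.items() if g != group for s in ss]
        diagnostics = []
        for pos in range(length):
            states = {s[pos] for s in seqs}
            other_states = {s[pos] for s in others}
            if len(states) == 1 and not (states & other_states):
                diagnostics.append((pos, states.pop()))
        print(group, "diagnostic characters:", diagnostics)
    ```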

  5. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that variations in pose, facial expression, and illumination of the facial image can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a certain linear combination of all the training samples is not able to fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples that represents it. It then exploits the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work for most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. The experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.
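
    A generic representation-based classification sketch (the baseline idea, not the authors' improved framework): represent the test sample with each class's training samples via least squares and assign the class with the smallest reconstruction residual. Data are synthetic.

    ```python
    # Nearest-subspace flavour of representation-based classification:
    # smallest per-class least-squares residual wins (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    dim, per_class = 20, 6
    centers = {c: rng.normal(size=dim) for c in (0, 1, 2)}
    train = {c: centers[c][:, None] + 0.1 * rng.normal(size=(dim, per_class))
             for c in (0, 1, 2)}
    test = centers[1] + 0.1 * rng.normal(size=dim)   # drawn near class 1

    residuals = {}
    for c, A in train.items():
        coef, *_ = np.linalg.lstsq(A, test, rcond=None)   # linear combination
        residuals[c] = np.linalg.norm(test - A @ coef)    # reconstruction error
    print({c: round(r, 3) for c, r in residuals.items()},
          "-> predicted class", min(residuals, key=residuals.get))
    ```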

  6. Metagenome fragment classification based on multiple motif-occurrence profiles.

    PubMed

    Matsushita, Naoki; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo

    2014-01-01

    A vast amount of metagenomic data has been obtained by extracting multiple genomes simultaneously from microbial communities, including genomes from uncultivable microbes. By analyzing these metagenomic data, novel microbes are discovered and new microbial functions are elucidated. The first step in analyzing these data is sequenced-read classification into reference genomes from which each read can be derived. The Naïve Bayes Classifier is a method for this classification. To identify the derivation of the reads, this method calculates a score based on the occurrence of a DNA sequence motif in each reference genome. However, large differences in the sizes of the reference genomes can bias the scoring of the reads. This bias might cause erroneous classification and decrease the classification accuracy. To address this issue, we have updated the Naïve Bayes Classifier method using multiple sets of occurrence profiles for each reference genome by normalizing the genome sizes, dividing each genome sequence into a set of subsequences of similar length and generating profiles for each subsequence. This multiple profile strategy improves the accuracy of the results generated by the Naïve Bayes Classifier method for simulated and Sargasso Sea datasets. PMID:25210663
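
    A hedged sketch of the multiple-profile idea on toy sequences: each reference genome is split into similar-length subsequences, a smoothed k-mer log-probability profile is built per subsequence (normalizing for genome size), and a read is scored against a genome by its best-scoring profile.

    ```python
    # Naive-Bayes-style read scoring with per-subsequence k-mer profiles.
    import math
    from collections import Counter
    from itertools import product

    K = 2
    KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

    def profile(seq):
        counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
        total = sum(counts.values())
        # Laplace smoothing keeps unseen k-mers at a finite log-probability.
        return {k: math.log((counts[k] + 1) / (total + len(KMERS)))
                for k in KMERS}

    def score(read, prof):
        return sum(prof[read[i:i + K]] for i in range(len(read) - K + 1))

    genomes = {"refA": "ACGT" * 50 + "AAAA" * 50, "refB": "GGCC" * 100}
    chunk = 100   # size normalization: fixed-length subsequences per genome
    profiles = {g: [profile(s[i:i + chunk]) for i in range(0, len(s), chunk)]
                for g, s in genomes.items()}

    read = "AAAAACGTAAAA"
    best = {g: max(score(read, p) for p in ps) for g, ps in profiles.items()}
    print(best, "-> classified as", max(best, key=best.get))
    ```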

  7. Proposed Classification of Auriculotemporal Nerve, Based on the Root System

    PubMed Central

    Komarnitki, Iulian; Tomczyk, Jacek; Ciszek, Bogdan; Zalewska, Marta

    2015-01-01

    The topography of the auriculotemporal nerve (ATN) root system is the main criterion of this nerve classification. Previous publications indicate that ATN may have between one and five roots. Most common is a one- or two-root variant of the nerve structure. The problem of many publications is the inconsistency of nomenclature which concerns the terms “roots”, “connecting branches”, or “branches” that are used to identify the same structures. This study was performed on 80 specimens (40 adults and 40 fetuses) to propose a classification based on: (i) the number of roots, (ii) way of root division, and (iii) configuration of interradicular fibers that form the ATN trunk. This new classification is a remedy for inconsistency of nomenclature of ATN in the infratemporal fossa. This classification system has proven beneficial when organizing all ATN variants described in previous studies and could become a helpful tool for surgeons and dentists. Examination of ATN from the infratemporal fossa of fetuses (the youngest was at 18 weeks gestational age) showed that, at that stage, the nerve is fully developed. PMID:25856464

  8. Structure-based classification and ontology in chemistry

    PubMed Central

    2012-01-01

    Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic

  9. Tutorial on agent-based modeling and simulation. Part 2 : how to model with agents.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2006-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems composed of interacting autonomous agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to do research. Some have gone so far as to contend that ABMS is a new way of doing science. Computational advances make possible a growing number of agent-based applications across many fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, and from modeling the growth and decline of ancient civilizations to modeling the complexities of the human immune system. This tutorial describes the foundations of ABMS, identifies ABMS toolkits and development methods illustrated through a supply chain example, and provides thoughts on the appropriate contexts for ABMS versus conventional modeling techniques.
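
    A minimal, hypothetical skeleton of the pattern the tutorial describes: autonomous agents with local state and behaviour rules, a scheduler stepping them, and an emergent aggregate outcome. The toy rumour-spreading model below is not an example from the tutorial itself.

    ```python
    # Minimal agent-based model: agents, a behaviour rule, a scheduler.
    import random

    random.seed(42)

    class Agent:
        """An agent with one piece of local state and one behaviour rule."""
        def __init__(self):
            self.informed = False

        def step(self, population):
            # Behaviour rule: an informed agent tells one random other agent.
            if self.informed:
                random.choice(population).informed = True

    population = [Agent() for _ in range(100)]
    population[0].informed = True                  # seed the rumour

    for t in range(15):                            # simple sequential scheduler
        for agent in population:
            agent.step(population)
        print(f"t={t:2d}  informed={sum(a.informed for a in population)}")
    ```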

  10. An AERONET-based aerosol classification using the Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Hamill, Patrick; Giordano, Marco; Ward, Carolyne; Giles, David; Holben, Brent

    2016-09-01

    We present an aerosol classification based on AERONET aerosol data from 1993 to 2012. We used the AERONET Level 2.0 almucantar aerosol retrieval products to define several reference aerosol clusters which are characteristic of the following general aerosol types: Urban-Industrial, Biomass Burning, Mixed Aerosol, Dust, and Maritime. The classification of a particular aerosol observation as one of these aerosol types is determined by its five-dimensional Mahalanobis distance to each reference cluster. We have calculated the fractional aerosol type distribution at 190 AERONET sites, as well as the monthly variation in aerosol type at those locations. The results are presented on a global map and individually in the supplementary material. Our aerosol typing is based on recognizing that different geographic regions exhibit characteristic aerosol types. To generate reference clusters we keep only data points that lie within a Mahalanobis distance of 2 from the centroid. Our aerosol characterization is based on the AERONET retrieved quantities; therefore, it does not include low optical depth values. The analysis is based on "point sources" (the AERONET sites) rather than globally distributed values. The classifications obtained will be useful in interpreting aerosol retrievals from satellite-borne instruments.
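
    A sketch of classification by Mahalanobis distance to reference clusters, as in the typing scheme described; the two-dimensional synthetic clusters below merely stand in for the five-dimensional AERONET reference clusters.

    ```python
    # Classify an observation by its Mahalanobis distance to each cluster.
    import numpy as np

    rng = np.random.default_rng(5)
    clusters = {
        "Dust":     rng.multivariate_normal([2.0, 0.5],
                                            [[0.10, 0.02], [0.02, 0.05]], 200),
        "Maritime": rng.multivariate_normal([0.2, 1.5],
                                            [[0.05, 0.00], [0.00, 0.10]], 200),
    }

    # Per-cluster mean and inverse covariance define the distance metric.
    stats = {name: (pts.mean(axis=0), np.linalg.inv(np.cov(pts.T)))
             for name, pts in clusters.items()}

    def mahalanobis(x, mean, inv_cov):
        d = x - mean
        return float(np.sqrt(d @ inv_cov @ d))

    obs = np.array([1.8, 0.6])   # hypothetical observation
    dists = {name: mahalanobis(obs, m, ic) for name, (m, ic) in stats.items()}
    print(dists, "-> classified as", min(dists, key=dists.get))
    ```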

  11. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic content are very important for content-based indexing and retrieval in video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classification model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval and indexing.

  12. Geometric nomenclature and classification of RNA base pairs.

    PubMed Central

    Leontis, N B; Westhof, E

    2001-01-01

    Non-Watson-Crick base pairs mediate specific interactions responsible for RNA-RNA self-assembly and RNA-protein recognition. An unambiguous and descriptive nomenclature with well-defined and nonoverlapping parameters is needed to communicate concisely structural information about RNA base pairs. The definitions should reflect underlying molecular structures and interactions and, thus, facilitate automated annotation, classification, and comparison of new RNA structures. We propose a classification based on the observation that the planar edge-to-edge, hydrogen-bonding interactions between RNA bases involve one of three distinct edges: the Watson-Crick edge, the Hoogsteen edge, and the Sugar edge (which includes the 2'-OH and which has also been referred to as the Shallow-groove edge). Bases can interact in either of two orientations with respect to the glycosidic bonds, cis or trans relative to the hydrogen bonds. This gives rise to 12 basic geometric types with at least two H bonds connecting the bases. For each geometric type, the relative orientations of the strands can be easily deduced. High-resolution examples of 11 of the 12 geometries are presently available. Bifurcated pairs, in which a single exocyclic carbonyl or amino group of one base directly contacts the edge of a second base, and water-inserted pairs, in which single functional groups on each base interact directly, are intermediate between two of the standard geometries. The nomenclature facilitates the recognition of isosteric relationships among base pairs within each geometry, and thus facilitates the recognition of recurrent three-dimensional motifs from comparison of homologous sequences. Graphical conventions are proposed for displaying non-Watson-Crick interactions on a secondary structure diagram. The utility of the classification in homology modeling of RNA tertiary motifs is illustrated. PMID:11345429
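
    The 12 basic geometric types follow directly from the combinatorics described, namely unordered pairs of the three interacting edges, each in cis or trans orientation, as the quick enumeration below shows.

    ```python
    # Enumerate the 12 basic geometric base-pair families:
    # 6 unordered edge combinations x 2 glycosidic-bond orientations.
    from itertools import combinations_with_replacement

    edges = ["Watson-Crick", "Hoogsteen", "Sugar"]
    pair_types = [(orient, e1, e2)
                  for e1, e2 in combinations_with_replacement(edges, 2)
                  for orient in ("cis", "trans")]

    for i, (orient, e1, e2) in enumerate(pair_types, 1):
        print(f"{i:2d}. {orient} {e1}/{e2}")
    print(len(pair_types), "basic geometric types")
    ```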

  13. Towards an agent-oriented programming language based on Scala

    NASA Astrophysics Data System (ADS)

    Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran

    2012-09-01

    Scala and its multi-threaded model based on actors represent an excellent framework for developing purely reactive agents. This paper presents early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced BDI agent architectures. The main advantage of the new language over many other existing solutions for programming BDI agents is a natural and straightforward integration of imperative and declarative programming constructs, fitted under a single development framework.

  14. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  15. Expected energy-based restricted Boltzmann machine for classification.

    PubMed

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector as the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output as the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. PMID:25318375
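
    An illustrative numpy sketch of the free-energy output rule that EE-RBM builds on: the score for an (input, class) pair is the negative free energy of an RBM over the concatenated input-plus-one-hot-class vector. The weights below are random, i.e. an untrained network, purely to show the computation.

    ```python
    # Negative free energy of an RBM as a per-class score (untrained weights).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_classes, n_hidden = 8, 3, 16
    W = rng.normal(scale=0.1, size=(n_hidden, n_in + n_classes))
    b_vis = rng.normal(scale=0.1, size=n_in + n_classes)
    b_hid = rng.normal(scale=0.1, size=n_hidden)

    def neg_free_energy(v):
        # F(v) = -b_vis.v - sum_j log(1 + exp(b_hid_j + W_j.v)); return -F(v).
        return b_vis @ v + np.sum(np.logaddexp(0.0, b_hid + W @ v))

    x = rng.random(n_in)
    scores = [neg_free_energy(np.concatenate([x, np.eye(n_classes)[c]]))
              for c in range(n_classes)]
    print("class scores:", np.round(scores, 3),
          "-> argmax class", int(np.argmax(scores)))
    ```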

  16. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model

    NASA Astrophysics Data System (ADS)

    Alexandridis, Konstantinos T.

    This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions it addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. It describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. It establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents toward constructing statistically and spatially optimal policies and actions, and explores the probabilistic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. It develops and describes a Monte Carlo methodology for experimental replications of agents' decisions regarding complex spatial parcel acquisition and learning. Recognizing the gap in spatially-explicit accuracy assessment techniques for complex spatial models, it proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the Receiver-Operator Characteristic curve, the impurity entropy and Gini functions, and Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially-explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools built for the MABEL model, are provided. It emphasizes modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy decisions related to land

  17. A proposed classification scheme for Ada-based software products

    NASA Technical Reports Server (NTRS)

    Cernosek, Gary J.

    1986-01-01

    As the requirements for producing software in the Ada language become a reality for projects such as the Space Station, a great amount of Ada-based program code will begin to emerge. Recognizing the potential for varying levels of quality in Ada programs, what is needed is a classification scheme that describes the quality of a software product whose source code exists in Ada form. A 5-level classification scheme is proposed that attempts to decompose the potentially broad spectrum of quality which Ada programs may possess. The number of classes and their corresponding names are not as important as the fact that there needs to be some set of criteria from which to evaluate programs existing in Ada. Exact criteria for each class are not presented, nor are any detailed suggestions of how to effectively implement this quality assessment. The idea of Ada-based software classification is introduced, and a set of requirements from which to base further research and development is suggested.

  18. An entropy-based classification scheme of meandering rivers

    NASA Astrophysics Data System (ADS)

    Abad, J. D.; Gutierrez, R. R.

    2015-12-01

    Some researchers have highlighted the fact that most river classification schemes have not evolved at the same pace as river morphodynamics models. The most prevalent classification scheme for meandering rivers was proposed by Brice (1975) and is mainly based on observational criteria. Likewise, thermodynamic principles have been applied in geomorphology over a relatively long period of time; for instance, a strong analogy between the meander angle of deflection and the distribution of momentum in gas dynamics has been identified. Based on the analysis of curvature data from 16 natural meanders (totalling 52 realizations) ranging from class B to class G in the Brice classification scheme, we propose a two-parameter meandering classification scheme, namely: [1] the yearly Shannon-wavelet-based negentropy gradient (ΔSWT), and [2] a quantitative continuum of the degree of confinement, which is estimated from the dimensionless Fréchet distance (δF*) between the meandering centerline curvature and that of the mean center. Our results show that δF* identifies a threshold of ~650 to discriminate freely meandering from confined rivers; thereby, scales of the second and third degree of confinement are quantified. Likewise, the proxy parameter ΔSWT suggests that there are four degrees of meandering morphodynamics, which lie in the intervals [10^-1, 10^0], [10^0, 10^1], [10^1, 10^2], and [10^2, 10^3]. Our results also suggest that the lowest negentropy corresponds to class G1 meanders (two phase, bimodal bankfull sinuosity, equiwidth) and class B2 (single phase, wider at bends, no bars). Class G2 (two phase, bimodal bankfull sinuosity, wider at bends with point bars) and class C (single phase, wider at bends, no bars) exhibit higher negentropy. Likewise, the middle-negentropy group is comprised of both confined meanders (B1, single phase and equiwidth channel, and D, single phase, wider at bends with point bars and chutes) and

  19. Resource-efficient wireless monitoring based on mobile agent migration

    NASA Astrophysics Data System (ADS)

    Smarsly, Kay; Law, Kincho H.; König, Markus

    2011-04-01

    Wireless sensor networks are increasingly adopted in many engineering applications such as environmental and structural monitoring. Having proven to be low-cost, easy to install and accurate, wireless sensor networks serve as a powerful alternative to traditional tethered monitoring systems. However, due to the limited resources of a wireless sensor node, critical problems are the power-consuming transmission of the collected sensor data and the usage of the onboard memory of the sensor nodes. This paper presents a new approach towards resource-efficient wireless sensor networks based on a multi-agent paradigm. In order to use the restricted computing resources efficiently, software agents are embedded in the wireless sensor nodes. On-board agents are designed to autonomously collect, analyze and condense the data sets using relatively simple yet resource-efficient algorithms. Upon detecting (potential) anomalies in the observed structural system, the on-board agents explicitly request specialized software agents. These specialized agents physically migrate from connected computer systems, or adjacent nodes, to the respective sensor node in order to perform more complex damage detection analyses based on their inherent expert knowledge. A prototype system is designed and implemented, deploying multi-agent technology and dynamic code migration, in a wireless sensor network for structural health monitoring. Laboratory tests are conducted to validate the performance of the agent-based wireless structural health monitoring system and to verify its autonomous damage detection capabilities.

  20. A simulation-based tutor that reasons about multiple agents

    SciTech Connect

    Rhodes Eliot, C. III; Park Woolf, B.

    1996-12-31

    This paper examines the problem of modeling multiple agents within an intelligent simulation-based tutor. Multiple agent and planning technology were used to enable the system to critique a human agent's reasoning about multiple agents. This perspective arises naturally whenever a student must learn to lead and coordinate a team of people. The system dynamically selected teaching goals, instantiated plans and modeled the student and the domain as it monitored the student's progress. The tutor provides one of the first complete integrations of a real-time simulation with knowledge-based reasoning. Other novel techniques of the system are reported, such as common-sense reasoning about plans, reasoning about protocol mechanisms, and using a real-time simulation for training.

  1. GECC: Gene Expression Based Ensemble Classification of Colon Samples.

    PubMed

    Rathore, Saima; Hussain, Mutawarra; Khan, Asifullah

    2014-01-01

    Gene expression deviates from its normal composition when a patient has cancer, and this variation can be used as an effective tool to detect cancer. In this study, we propose a novel gene-expression-based colon classification scheme (GECC) that exploits the variations in gene expression for classifying colon gene samples into normal and malignant classes. The novelty of GECC lies in two complementary aspects. First, to cater for the overwhelmingly large size of gene-based data sets, various feature extraction strategies, such as chi-square, F-score, principal component analysis (PCA) and minimum redundancy maximum relevance (mRMR), have been employed to select discriminative genes from the full gene set. Second, a majority-voting-based ensemble of support vector machines (SVM) is proposed to classify the given gene-based samples. Previously, individual SVM models have been used for colon classification; however, their performance is limited. In this research study, we propose an SVM-ensemble-based approach for gene-based classification of colon samples, wherein the individual SVM models are constructed through the learning of different SVM kernels: linear, polynomial, radial basis function (RBF), and sigmoid. The predicted results of the individual models are combined through majority voting, making the combined decision space more discriminative. The proposed technique has been tested on four colon, and several other binary-class, gene expression data sets, and improved performance has been achieved compared to previously reported gene-based colon cancer detection techniques. The computational times required for training and testing the 208 × 5,851 data set were 591.01 s and 0.019 s, respectively. PMID:26357050
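
    A sketch of the majority-voting idea on synthetic data standing in for gene-expression matrices: SVMs with the four kernels named above are combined with hard voting. Sample sizes and parameters below are illustrative, not the study's settings.

    ```python
    # Majority-voting ensemble of SVMs with different kernels.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=10, random_state=2)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

    ensemble = VotingClassifier(
        estimators=[(k, SVC(kernel=k))
                    for k in ("linear", "poly", "rbf", "sigmoid")],
        voting="hard")          # majority vote over the kernel-specific SVMs
    ensemble.fit(X_tr, y_tr)
    print("ensemble accuracy:", round(ensemble.score(X_te, y_te), 3))
    ```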

  2. A science based approach to topical drug classification system (TCS).

    PubMed

    Shah, Vinod P; Yacobi, Avraham; Rădulescu, Flavian Ştefan; Miron, Dalia Simona; Lane, Majella E

    2015-08-01

    The Biopharmaceutics Classification System (BCS) for oral immediate-release solid drug products has been very successful; its implementation in the drug industry and in regulatory approval has shown significant progress, primarily because BCS was developed using sound scientific judgment. Following the success of BCS, we have considered topical drug products for a similar classification system based on sound scientific principles. In the USA, most generic topical drug products have qualitatively (Q1) and quantitatively (Q2) the same excipients as the reference listed drug (RLD). The applications of in vitro release (IVR) and in vitro characterization are considered for a range of dosage forms (suspensions, creams, ointments and gels) of differing strengths. We advance a Topical Drug Classification System (TCS) based on a consideration of Q1 and Q2 as well as the arrangement of matter and microstructure of topical formulations (Q3). Four distinct classes are presented for the various scenarios that may arise, depending on whether a biowaiver can be granted or not. PMID:26070249

  3. The DTW-based representation space for seismic pattern classification

    NASA Astrophysics Data System (ADS)

    Orozco-Alzate, Mauricio; Castro-Cabrera, Paola Alexandra; Bicego, Manuele; Londoño-Bonilla, John Makario

    2015-12-01

    Distinguishing among the different seismic volcanic patterns is still one of the most important and labor-intensive tasks for volcano monitoring. This task could be lightened and made free from subjective bias by using automatic classification techniques. In this context, a core but often overlooked issue is the choice of an appropriate representation of the data to be classified. Recently, it has been suggested that using a relative representation (i.e. proximities, namely dissimilarities on pairs of objects) instead of an absolute one (i.e. features, namely measurements on single objects) is advantageous to exploit the relational information contained in the dissimilarities to derive highly discriminant vector spaces, where any classifier can be used. According to that motivation, this paper investigates the suitability of a dynamic time warping (DTW) dissimilarity-based vector representation for the classification of seismic patterns. Results show the usefulness of such a representation in the seismic pattern classification scenario, including analyses of potential benefits from recent advances in the dissimilarity-based paradigm such as the proper selection of representation sets and the combination of different dissimilarity representations that might be available for the same data.
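
    A minimal dynamic time warping implementation, the dissimilarity underlying the representation described; in the dissimilarity-based paradigm, each seismic pattern would then be represented by its vector of DTW distances to a chosen representation set. Signals below are synthetic.

    ```python
    # DTW distance plus a toy dissimilarity-vector representation.
    import numpy as np

    def dtw(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    signal = np.sin(np.linspace(0, 6, 80))
    prototypes = [np.sin(np.linspace(0, 6, 60)),               # warped copy
                  np.random.default_rng(1).normal(size=60)]    # noise
    vector = [dtw(signal, p) for p in prototypes]   # dissimilarity vector
    print("dissimilarity representation:", np.round(vector, 2))
    ```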

  4. Perceptually based techniques for semantic image classification and retrieval

    NASA Astrophysics Data System (ADS)

    Depalov, Dejan; Pappas, Thrasyvoulos; Li, Dongge; Gandhi, Bhavan

    2006-02-01

    The accumulation of large collections of digital images has created the need for efficient and intelligent schemes for content-based image retrieval. Our goal is to organize the contents semantically, according to meaningful categories. We present a new approach for semantic classification that utilizes a recently proposed color-texture segmentation algorithm (by Chen et al.), which combines knowledge of human perception and signal characteristics to segment natural scenes into perceptually uniform regions. The color and texture features of these regions are used as medium level descriptors, based on which we extract semantic labels, first at the segment and then at the scene level. The segment features consist of spatial texture orientation information and color composition in terms of a limited number of locally adapted dominant colors. The focus of this paper is on region classification. We use a hierarchical vocabulary of segment labels that is consistent with those used in the NIST TRECVID 2003 development set. We test the approach on a database of 9000 segments obtained from 2500 photographs of natural scenes. For training and classification we use the Linear Discriminant Analysis (LDA) technique. We examine the performance of the algorithm (precision and recall rates) when different sets of features (e.g., one or two most dominant colors versus four quantized dominant colors) are used. Our results indicate that the proposed approach offers significant performance improvements over existing approaches.

  5. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    PubMed Central

    Kloth, Michael; Buettner, Reinhard

    2014-01-01

    Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. The increase of information on molecular changes however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as the manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaption. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration, using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring. PMID:24879454

  6. Image classification based on region of interest detection

    NASA Astrophysics Data System (ADS)

    Zhou, Huabing; Zhang, Yanduo; Yu, Zhenghong

    2015-12-01

    For image classification tasks, the region containing the object, which plays a decisive role, is indefinite in both position and scale. In this case, it does not seem appropriate to use the spatial pyramid matching (SPM) approach directly. In this paper, we describe an approach for handling this problem based on region of interest (ROI) detection. It first makes use of an object detection algorithm to separate an image into object and scene regions, and then constructs spatial histogram features for them separately based on SPM. Moreover, the detection score is used for rescoring. Our contributions are: i) verifying the feasibility of using a state-of-the-art object detection algorithm to separate foreground and background for image classification; and ii) a simple method, called coarse object alignment matching, for constructing histograms using the foreground and background provided by object localization. Experimental results demonstrate an obvious superiority of our approach over the standard SPM method; it also outperforms many state-of-the-art methods for several categories.

  7. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  8. An ellipse detection algorithm based on edge classification

    NASA Astrophysics Data System (ADS)

    Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point form and applying a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm; Euclidean distance is used to measure the distance from an edge point to the elliptical boundary. Experimental results show that the algorithm can detect ellipses well in cases of edge interference or mutually occluding edges, and that it has higher detection precision and lower time consumption than the RED algorithm.

  9. Simple-Random-Sampling-Based Multiclass Text Classification Algorithm

    PubMed Central

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  10. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
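
    A toy sketch of the routing decision the patent describes: a new document vector is compared with each cluster's centroid and handed to the most similar cluster agent, or to a new cluster agent if no similarity clears a threshold. Vectors and the threshold are made up for illustration.

    ```python
    # Route a new document vector to the most similar cluster agent.
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    centroids = {"cluster-1": np.array([0.9, 0.1, 0.0]),
                 "cluster-2": np.array([0.1, 0.8, 0.3])}
    new_doc = np.array([0.2, 0.7, 0.4])     # hypothetical document vector
    THRESHOLD = 0.6                         # assumed similarity cutoff

    sims = {name: cosine(new_doc, c) for name, c in centroids.items()}
    best = max(sims, key=sims.get)
    target = best if sims[best] >= THRESHOLD else "new cluster agent"
    print({k: round(s, 3) for k, s in sims.items()}, "->", target)
    ```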

  11. A comprehensive classification of nucleic acid structural families based on strand direction and base pairing.

    PubMed Central

    Lavery, R; Zakrzewska, K; Sun, J S; Harvey, S C

    1992-01-01

    We propose a classification of DNA structures formed from 1 to 4 strands, based only on relative strand directions, base to strand orientation and base pairing geometries. This classification and its associated notation enable all nucleic acids to be grouped into structural families and bring to light possible structures which have not yet been observed experimentally. It also helps in understanding transitions between families and can assist in the design of multistrand structures. PMID:1383936

  12. Soil classification based on the spectral characteristics of topsoil samples

    NASA Astrophysics Data System (ADS)

    Liu, Huanjun; Zhang, Xiaokang; Zhang, Xinle

    2016-04-01

    Soil taxonomy plays an important role in soil use and management, but China has only a coarse soil map, created from 1980s data. New technology, e.g. spectroscopy, could simplify soil classification. This study attempts to classify soils based on the spectral characteristics of topsoil samples. 148 topsoil samples of typical soils, including Black soil, Chernozem, Blown soil and Meadow soil, were collected from the Songnen plain, Northeast China, and their laboratory spectral reflectance in the visible and near-infrared region (400-2500 nm) was processed with weighted moving average, resampling, and continuum removal. Spectral indices were extracted from the soil spectral characteristics, including the position of the second absorption feature of the spectral curve, the area of the first absorption vale, and the slopes of the spectral curve at 500-600 nm and 1340-1360 nm. K-means clustering and a decision tree were then used, respectively, to build soil classification models. The results indicated that 1) the second absorption positions of Black soil and Chernozem were located at 610 nm and 650 nm, respectively; 2) the spectral curve of the Meadow soil is similar to that of its adjacent soil, which could be due to soil erosion; and 3) the decision tree model showed higher classification accuracy, with accuracies for Black soil, Chernozem, Blown soil and Meadow soil of 100%, 88%, 97% and 50%, respectively; the accuracy for Blown soil could be increased to 100% by adding one more spectral index (the area of the first two vales) to the model, which shows that the model could be used for soil classification and soil mapping in the near future.
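
    A hedged sketch of a decision-tree classifier over spectral indices of the kind described (absorption position, vale area, curve slopes); the feature values and labels below are synthetic placeholders, not the study's 148 samples, so the reported accuracy is meaningless except as a demonstration of the pipeline.

    ```python
    # Decision tree over placeholder spectral indices, evaluated by CV.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    n = 148
    X = np.column_stack([
        rng.choice([610, 650], size=n) + rng.normal(0, 5, n),  # absorption pos.
        rng.random(n),                                         # first vale area
        rng.normal(0, 1, n),                                   # slope 500-600 nm
        rng.normal(0, 1, n),                                   # slope 1340-1360 nm
    ])
    y = rng.choice(["Black", "Chernozem", "Blown", "Meadow"], size=n)

    tree = DecisionTreeClassifier(max_depth=4, random_state=4)
    print("CV accuracy:", round(cross_val_score(tree, X, y, cv=5).mean(), 3))
    ```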

  13. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  14. Agent-Based Modeling of Growth Processes

    ERIC Educational Resources Information Center

    Abraham, Ralph

    2014-01-01

    Growth processes abound in nature, and are frequently the target of modeling exercises in the sciences. In this article we illustrate an agent-based approach to modeling, in the case of a single example from the social sciences: bullying.

  15. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    PubMed

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself as widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. Recently, however, the employability and sufficiency of the ACR criteria have come under debate, and several evaluation methods, including clinical evaluation methods, have been proposed by researchers. Accordingly, ACR had to update its criteria, announced in 1990, 2010 and 2011. The proposed rule-based fuzzy logic method aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and physical examinations were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. The fuzzy predictor was generally 95.56% consistent with at least one of the specialists who was not a creator of the fuzzy rule base. Thus, in a diagnostic classification that also graded the severity of FMS, consistent findings were obtained from the comparison of the interpretations and experiences of specialists with the fuzzy logic approach. The study proposes a rule base that could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to disease classification; the probability of occurrence and the severity were classified at the same time. In addition, those who were not suffering from FMS were
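
    A minimal sketch of how such a rule-based fuzzy classifier can be structured, using triangular membership functions over three of the parameters named above; the breakpoints and the two rules shown are illustrative assumptions, not the clinically derived rule base of the study.

```python
# Rule-based fuzzy severity grading (illustrative membership functions).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fms_severity(tender_points, pain_severity, fatigue_severity):
    # Fuzzify the inputs (breakpoints are assumed for illustration).
    tp_high   = tri(tender_points, 9, 14, 18)
    pain_high = tri(pain_severity, 5, 8, 10)
    fat_high  = tri(fatigue_severity, 5, 8, 10)

    # Rule 1: IF tender points high AND pain high AND fatigue high THEN severe.
    severe = min(tp_high, pain_high, fat_high)
    # Rule 2: IF tender points high AND (pain OR fatigue high) THEN moderate.
    moderate = max(min(tp_high, max(pain_high, fat_high)) - severe, 0.0)
    none = max(1.0 - severe - moderate, 0.0)
    return {"severe": severe, "moderate": moderate, "none": none}

print(fms_severity(tender_points=13, pain_severity=8, fatigue_severity=7))
```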

  16. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the trend toward knowledge-intensive collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities the agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval and reuse of manufacturing knowledge, the generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  17. The fractional volatility model: An agent-based interpretation

    NASA Astrophysics Data System (ADS)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and/or properties of the financial institutions might be responsible for the features of the fractional volatility model.
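
    A minimal sketch of volatility driven by fractional noise, simulating fractional Brownian motion exactly via Cholesky factorization of its covariance; the Hurst exponent H and the coupling constant k are illustrative assumptions, not the paper's calibrated values.

```python
# Log-price simulation with fractional-noise-driven volatility.
import numpy as np

def fbm(n, H, T=1.0, seed=0):
    """Sample a fractional Brownian motion path at n equally spaced times."""
    t = np.linspace(T / n, T, n)
    # Covariance of fBm: 0.5 * (s^2H + t^2H - |t - s|^2H).
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    return L @ np.random.default_rng(seed).standard_normal(n)

n, H, k, sigma0 = 1000, 0.8, 0.3, 0.2                 # assumed parameters
B_H = fbm(n, H)
sigma = sigma0 * np.exp(k * B_H)                      # fractional volatility
returns = sigma * np.random.default_rng(1).standard_normal(n) / np.sqrt(n)
log_price = np.cumsum(returns)
print(log_price[-1], sigma.mean())
```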

  18. Texture based classification of the severity of mitral regurgitation.

    PubMed

    Balodi, Arun; Dewal, M L; Anand, R S; Rawat, Anurag

    2016-06-01

    Clinically, the severity of valvular regurgitation is assessed by manual tracing of the regurgitant jet in the respective chambers. This work presents a computer-aided diagnostic (CAD) system for assessing the severity of mitral regurgitation (MR) based on image processing that does not require the intervention of the radiologist or clinician. Eight different texture feature sets from the regurgitant area (selected through an arbitrary criterion) have been used in the present approach. First-order statistics were used initially; however, given their limitations, additional texture features were also used: the spatial gray level difference matrix, gray level difference statistics, the neighborhood gray tone difference matrix, the statistical feature matrix, Laws' texture energy measures, fractal dimension texture analysis and the Fourier power spectrum. For the classification task a supervised classifier, i.e., a support vector machine, has been used. The classification accuracy improved significantly when these texture features were used in combination rather than fed individually to the classifier. Classification accuracies of 95.65±1.09, 95.65±1.09 and 95.36±1.13 were obtained in the apical two-chamber, apical four-chamber and parasternal long-axis views, respectively. Therefore, the results of this paper indicate that the proposed CAD system may effectively assist radiologists in establishing (confirming) the MR stages, namely, mild, moderate and severe. PMID:27127894
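
    A minimal sketch of one of the texture descriptors named above, the gray level co-occurrence matrix, feeding a support vector machine; it assumes a recent scikit-image (graycomatrix; older releases spell it greycomatrix), and the random patches stand in for regurgitant-area ROIs.

```python
# GLCM texture features + SVM on illustrative image patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 3, size=40)            # mild / moderate / severe
X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```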

  19. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D. Sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion of risk-based safety and mission assurance at GSFC.

  20. Hippocampal shape analysis: surface-based representation and classification

    NASA Astrophysics Data System (ADS)

    Shen, Li; Ford, James; Makedon, Fillia; Saykin, Andrew

    2003-05-01

    Surface-based representation and classification techniques are studied for hippocampal shape analysis. The goal is twofold: (1) develop a new framework of salient feature extraction and accurate classification for 3D shape data; (2) detect hippocampal abnormalities in schizophrenia using this technique. A fine-scale spherical harmonic expansion is employed to describe a closed 3D surface object. The expansion can then easily be transformed to extract only shape information (i.e., excluding translation, rotation, and scaling) and create a shape descriptor comparable across different individuals. This representation captures shape features and is flexible enough to do shape modeling, identify statistical group differences, and generate similar synthetic shapes. Principal component analysis is used to extract a small number of independent features from high dimensional shape descriptors, and Fisher's linear discriminant is applied for pattern classification. This framework is shown to be able to perform well in distinguishing clear group differences as well as small and noisy group differences using simulated shape data. In addition, the application of this technique to real data indicates that group shape differences exist in hippocampi between healthy controls and schizophrenic patients.
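
    A minimal sketch of the classification stage described above, with PCA reducing high-dimensional shape descriptors and Fisher's linear discriminant separating the groups; the synthetic vectors stand in for spherical-harmonic shape coefficients, not the study's data.

```python
# PCA feature reduction followed by Fisher's linear discriminant.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=(30, 500))   # healthy controls (toy)
patients = rng.normal(0.3, 1.0, size=(30, 500))   # shifted group mean (toy)
X = np.vstack([controls, patients])
y = np.array([0] * 30 + [1] * 30)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.score(X, y))
```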

  1. Fruit classification based on weighted score-level feature fusion

    NASA Astrophysics Data System (ADS)

    Kuang, Hulin; Hang Chan, Leanne Lai; Liu, Cairong; Yan, Hong

    2016-01-01

    We describe an object classification method based on weighted score-level feature fusion using learned weights. Our method is able to recognize 20 object classes in a customized fruit dataset. Although the fusion of multiple features is commonly used to distinguish variable object classes, the optimal combination of features is not well defined. Moreover, in these methods, most parameters used for feature extraction are not optimized and the contribution of each feature to an individual class is not considered when determining the weight of the feature. Our algorithm relies on optimizing a single feature during feature selection and learning the weight of each feature for an individual class from the training data using a linear support vector machine before the features are linearly combined with the weights at the score level. The optimal single feature is selected using cross-validation. The optimal combination of features is explored and tested experimentally using a customized fruit dataset with 20 object classes and a variety of complex backgrounds. The experiment results show that the proposed feature fusion method outperforms four state-of-the-art fruit classification algorithms and improves the classification accuracy when compared with some state-of-the-art feature fusion methods.
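
    A minimal sketch of score-level fusion with learned weights: class-score vectors from two feature channels are stacked, and a linear SVM learns per-class, per-channel weights, standing in for the paper's weight-learning scheme; the scores themselves are synthetic.

```python
# Weighted score-level fusion via a linear SVM over stacked score vectors.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, n_classes = 200, 20
# Class-probability score vectors produced by two separate feature channels.
scores_a = rng.dirichlet(np.ones(n_classes), size=n)
scores_b = rng.dirichlet(np.ones(n_classes), size=n)
y = rng.integers(0, n_classes, size=n)

# The linear SVM's coefficients act as the fusion weights: they decide how
# much each channel's score for each class should count.
stacked = np.hstack([scores_a, scores_b])
fusion = LinearSVC(max_iter=5000).fit(stacked, y)
print(fusion.predict(stacked[:5]))
```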

  2. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers and is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were applied to discriminate apples according to geographical origin. Hyperspectral images of 207 Fuji apple samples were collected by a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main effective wavelength images, and characteristic variables were then extracted by texture analysis based on the gray level co-occurrence matrix (GLCM) from the dominant waveband images. All characteristic variables were obtained by fusing the data of the images in the effective spectra. A support vector machine (SVM) was used to construct the classification model and showed excellent performance, with classification accuracies of 92.75% in the training set and 89.86% in the prediction set. The overall results demonstrate that the hyperspectral imaging technique coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples according to geographical origin.

  3. Gadolinium-Based Contrast Agent Accumulation and Toxicity: An Update.

    PubMed

    Ramalho, J; Semelka, R C; Ramalho, M; Nunes, R H; AlObaidy, M; Castillo, M

    2016-07-01

    In current practice, gadolinium-based contrast agents have been considered safe when used at clinically recommended doses in patients without severe renal insufficiency. The causal relationship between gadolinium-based contrast agents and nephrogenic systemic fibrosis in patients with renal insufficiency resulted in new policies regarding the administration of these agents. After an effective screening of patients with renal disease by performing either unenhanced or reduced-dose-enhanced studies in these patients and by using the most stable contrast agents, nephrogenic systemic fibrosis has been largely eliminated since 2009. Evidence of in vivo gadolinium deposition in bone tissue in patients with normal renal function is well-established, but recent literature showing that gadolinium might also deposit in the brain in patients with intact blood-brain barriers caught many individuals in the imaging community by surprise. The purpose of this review was to summarize the literature on gadolinium-based contrast agents, tying together information on agent stability and animal and human studies, and to emphasize that low-stability agents are the ones most often associated with brain deposition. PMID:26659341

  4. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  5. Laser-based instrumentation for the detection of chemical agents

    SciTech Connect

    Hartford, A. Jr.; Sander, R.K.; Quigley, G.P.; Radziemski, L.J.; Cremers, D.A.

    1982-01-01

    Several laser-based techniques are being evaluated for the remote, point, and surface detection of chemical agents. Among the methods under investigation are optoacoustic spectroscopy, laser-induced breakdown spectroscopy (LIBS), and synchronous detection of laser-induced fluorescence (SDLIF). Optoacoustic detection has already been shown to be capable of extremely sensitive point detection. Its application to remote sensing of chemical agents is currently being evaluated. Atomic emission from the region of a laser-generated plasma has been used to identify the characteristic elements contained in nerve (P and F) and blister (S and Cl) agents. Employing this LIBS approach, detection of chemical agent simulants dispersed in air and adsorbed on a variety of surfaces has been achieved. Synchronous detection of laser-induced fluorescence provides an attractive alternative to conventional LIF, in that an artificial narrowing of the fluorescence emission is obtained. The application of this technique to chemical agent simulants has been successfully demonstrated. 19 figures.

  6. Agent based modeling of the coevolution of hostility and pacifism

    NASA Astrophysics Data System (ADS)

    Dalmagro, Fermin; Jimenez, Juan

    2015-01-01

    We propose a model based on a population of agents whose states represent either hostile or peaceful behavior. Randomly selected pairs of agents interact according to a variation of the Prisoner's Dilemma game, and the probabilities that the agents behave aggressively or not are constantly updated by the model so that the agents that remain in the game are those with the highest fitness. We show that the population of agents oscillates between generalized conflict and global peace, without reaching either stable state. We then use this model to explain some of the emergent behaviors in collective conflicts by comparing the simulated results with empirical data obtained from social systems. In particular, using public data reports, we show how the model precisely reproduces interesting quantitative characteristics of diverse types of armed conflicts, public protests, riots and strikes.
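
    A minimal sketch of this kind of dynamics: each agent carries a probability of behaving hostilely, random pairs play a Prisoner's Dilemma-style game, and the probabilities are reinforced toward the better-earning action. The payoff matrix and update rule are assumptions, not the authors' exact model.

```python
# Agent-based coevolution of hostility and pacifism (illustrative rules).
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0, size=200)        # per-agent probability of hostility
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5),  # 0 = peaceful, 1 = hostile
          (1, 0): (5, 0), (1, 1): (1, 1)}

for step in range(10_000):
    i, j = rng.choice(len(p), size=2, replace=False)
    a_i, a_j = rng.random() < p[i], rng.random() < p[j]
    r_i, r_j = PAYOFF[(int(a_i), int(a_j))]
    # Reinforce the action that paid more, with a small learning rate.
    p[i] += 0.01 * (r_i - r_j) * (1 if a_i else -1)
    p[j] += 0.01 * (r_j - r_i) * (1 if a_j else -1)
    np.clip(p, 0.0, 1.0, out=p)

print("mean hostility:", p.mean())
```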

  7. In vitro antimicrobial activity of peroxide-based bleaching agents.

    PubMed

    Napimoga, Marcelo Henrique; de Oliveira, Rogério; Reis, André Figueiredo; Gonçalves, Reginaldo Bruno; Giannini, Marcelo

    2007-06-01

    The antibacterial activity of 4 commercial bleaching agents (Day White, Colgate Platinum, Whiteness 10% and 16%) against 6 oral pathogens (Streptococcus mutans, Streptococcus sobrinus, Streptococcus sanguinis, Candida albicans, Lactobacillus casei, and Lactobacillus acidophilus) and Staphylococcus aureus was evaluated. A chlorhexidine solution was used as a positive control, while distilled water was the negative control. Bleaching agents and control materials were inserted into sterilized stainless-steel cylinders that were positioned under the inoculated agar plates (n = 4). After incubation for the appropriate period of time for each microorganism, the inhibition zones were measured. Data were analyzed by 2-way analysis of variance and the Tukey test (α = 0.05). All bleaching agents and the chlorhexidine solution produced antibacterial inhibition zones. Antimicrobial activity depended on the peroxide-based bleaching agent. For most microorganisms evaluated, bleaching agents produced inhibition zones similar to or larger than those observed for chlorhexidine. C. albicans, L. casei, and L. acidophilus were the most resistant microorganisms. PMID:17625621

  8. Multi-issue Agent Negotiation Based on Fairness

    NASA Astrophysics Data System (ADS)

    Zuo, Baohe; Zheng, Sue; Wu, Hong

    Agent-based e-commerce services have become a hotspot, and how to make the agent negotiation process quick and efficient is the main research direction of this area. In multi-issue models, MAUT (Multi-Attribute Utility Theory) and its derived theories usually give little consideration to the fairness of both negotiators. This work presents a general model of agent negotiation which considers the satisfaction of both negotiators via autonomous learning. The model can evaluate offers from the opponent agent based on the degree of satisfaction, learn online to acquire the opponent's knowledge from historical interaction instances and the current negotiation, and make concessions dynamically based on a fairness objective. By building the optimal negotiation model, the bilateral negotiation achieves higher efficiency and a fairer deal.

  9. Agent-based scheduling system to achieve agility

    NASA Astrophysics Data System (ADS)

    Akbulut, Muhtar B.; Kamarthi, Sagar V.

    2000-12-01

    Today's competitive enterprises need to design, develop, and manufacture their products rapidly and inexpensively. Agile manufacturing has emerged as a new paradigm to meet these challenges. Agility requires, among many other things, scheduling and control software systems that are flexible, robust, and adaptive. In this paper a new agent-based scheduling system (ABSS) is developed to meet the challenges of an agile manufacturing system. In ABSS, unlike in traditional approaches, information and decision-making capabilities are distributed among the system entities, called agents. In contrast with most agent-based scheduling systems, which commonly use a bidding approach, ABSS employs a global performance monitoring strategy. A production-rate-based global performance metric which effectively assesses system performance is developed to assist the agents' decision-making process. To test the architecture, agent-based discrete event simulation software was developed. The experiments performed using the simulation software yielded encouraging results supporting the applicability of agent-based systems to the scheduling and control needs of an agile manufacturing system.

  10. Performance verification of a LIF-LIDAR technique for stand-off detection and classification of biological agents

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Zygmunt, Marek; Muzal, Michał; Knysak, Piotr; Młodzianko, Andrzej; Gawlikowski, Andrzej; Drozd, Tadeusz; Kopczyński, Krzysztof; Mierczyk, Zygmunt; Kaszczuk, Mirosława; Traczyk, Maciej; Gietka, Andrzej; Piotrowski, Wiesław; Jakubaszek, Marcin; Ostrowski, Roman

    2015-04-01

    LIF (laser-induced fluorescence) LIDAR (light detection and ranging) is one of the very few promising methods for long-range stand-off detection of air-borne biological particles, and a limited classification of the detected material also appears feasible. We present the design details and hardware setup of the developed range-resolved multichannel LIF-LIDAR system. The device is based on two pulsed UV laser sources operating at 355 nm and 266 nm (the 3rd and 4th harmonics of a Q-switched, solid-state Nd:YAG laser, respectively). Range-resolved fluorescence signals are collected in 28 channels of a compound PMT sensor coupled with a Czerny-Turner spectrograph. The calculated theoretical sensitivities are confronted with the results obtained during a field measurement campaign. Classification efforts based on linear processing of the 28-element fluorescence spectral signatures are also presented.

  11. A Chemistry-Based Classification for Peridotite Xenoliths

    NASA Astrophysics Data System (ADS)

    Block, K. A.; Ducea, M.; Raye, U.; Stern, R. J.; Anthony, E. Y.; Lehnert, K. A.

    2007-12-01

    The development of a petrological and geochemical database for mantle xenoliths is important for interpreting EarthScope geophysical results. Interpretation of the compositional characteristics of xenoliths requires a sound basis for comparing geochemical results, even when no petrographic modes are available. Peridotite xenoliths are generally classified on the basis of mineralogy (Streckeisen, 1973) derived from point-counting methods. Modal estimates, particularly on heterogeneous samples, are conducted using various methodologies and are therefore subject to large statistical error. Also, many studies simply do not report the modes. Other classifications for peridotite xenoliths based on host matrix or tectonic setting (cratonic vs. non-cratonic) are poorly defined and provide little information on where samples from transitional settings fit within a classification scheme (e.g., xenoliths from circum-cratonic locations). We present here a classification for peridotite xenoliths based on bulk-rock major element chemistry, which is one of the most common types of data reported in the literature. A chemical dataset of over 1150 peridotite xenoliths is compiled from two online geochemistry databases, the EarthChem Deep Lithosphere Dataset and GEOROC (http://www.earthchem.org), and is downloaded with the rock names reported in the original publications. Ternary plots of combinations of the SiO2-CaO-Al2O3-MgO (SCAM) components display sharp boundaries that define the dunite, harzburgite, lherzolite, and wehrlite-pyroxenite fields and provide a graphical basis for classification. In addition, for the CaO-Al2O3-MgO (CAM) diagram, a boundary between harzburgite and lherzolite at approximately 19% CaO is defined by a plot of over 160 abyssal peridotite compositions calculated from observed modes using the methods of Asimow (1999) and Baker and Beckett (1999). We anticipate that our SCAM classification is a first step in the development of a uniform basis for

  12. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontologies, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. The ASISG is composed of multi-agent systems and a geographic ontology. The multi-agent systems comprise User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, Task Execution Agents and Monitor Agents. The architecture of the ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of Data Access Agents, Resource Agents and Geo-Agents, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agents, Monitor Agents and Data Analysis Agents, uses a hybrid method to manage all resources registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, while discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. Local Ontology Agents can be filtered to construct a virtual organization that provides a global schema. The virtual organization lightens the burden on users because they need not search for information site by site manually. The application layer, which is composed of User Agents, Geo-Agents and Task Execution Agents, provides a corresponding interface to a domain user. The functions that the ASISG should provide are: 1) it integrates different spatial information systems on the semantic grid

  13. S1 gene-based phylogeny of infectious bronchitis virus: An attempt to harmonize virus classification.

    PubMed

    Valastro, Viviana; Holmes, Edward C; Britton, Paul; Fusaro, Alice; Jackwood, Mark W; Cattoli, Giovanni; Monne, Isabella

    2016-04-01

    Infectious bronchitis virus (IBV) is the causative agent of a highly contagious disease that results in severe economic losses to the global poultry industry. The virus exists in a wide variety of genetically distinct viral types, and both phylogenetic analysis and measures of pairwise similarity among nucleotide or amino acid sequences have been used to classify IBV strains. However, there is currently no consensus on the method by which IBV sequences should be compared, and heterogeneous genetic group designations that are inconsistent with phylogenetic history have been adopted, leading to the confusing coexistence of multiple genotyping schemes. Herein, we propose a simple and repeatable phylogeny-based classification system combined with an unambiguous and rational lineage nomenclature for the assignment of IBV strains. By using complete nucleotide sequences of the S1 gene we determined the phylogenetic structure of IBV, which in turn allowed us to define 6 genotypes that together comprise 32 distinct viral lineages and a number of inter-lineage recombinants. Because of extensive rate variation among IBVs, we suggest that the inference of phylogenetic relationships alone represents a more appropriate criterion for sequence classification than pairwise sequence comparisons. The adoption of an internationally accepted viral nomenclature is crucial for future studies of IBV epidemiology and evolution, and the classification scheme presented here can be updated and revised as novel S1 sequences become available. PMID:26883378

  14. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
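
    A minimal sketch of per-pixel Bayesian classification in this spirit, using Gaussian naive Bayes over calibrated radiance bands; the band statistics and the clear/cloud/glint classes are synthetic illustrations, not the GES DAAC implementation.

```python
# Gaussian naive Bayes pixel classification on illustrative radiance bands.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Training pixels: rows are pixels, columns are calibrated radiance bands.
clear = rng.normal([0.05, 0.04, 0.03], 0.01, size=(500, 3))
cloud = rng.normal([0.60, 0.58, 0.55], 0.05, size=(500, 3))
glint = rng.normal([0.30, 0.10, 0.05], 0.03, size=(500, 3))
X = np.vstack([clear, cloud, glint])
y = np.repeat(["clear", "cloud", "glint"], 500)

model = GaussianNB().fit(X, y)
scene = rng.normal(0.3, 0.2, size=(1024, 3))   # one flattened scene (toy)
labels = model.predict(scene)                  # per-pixel class labels
print((labels == "clear").mean())              # fraction of clear pixels
```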

  15. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  16. The Development of Sugar-Based Anti-Melanogenic Agents

    PubMed Central

    Bin, Bum-Ho; Kim, Sung Tae; Bhin, Jinhyuk; Lee, Tae Ryong; Cho, Eun-Gyung

    2016-01-01

    The regulation of melanin production is important for managing skin darkness and hyperpigmentary disorders. Numerous anti-melanogenic agents that target tyrosinase activity/stability, melanosome maturation/transfer, or melanogenesis-related signaling pathways have been developed. As a rate-limiting enzyme in melanogenesis, tyrosinase has been the most attractive target, but tyrosinase-targeted treatments still pose serious potential risks, indicating the necessity of developing lower-risk anti-melanogenic agents. Sugars are ubiquitous natural compounds found in humans and other organisms. Here, we review the recent advances in research on the roles of sugars and sugar-related agents in melanogenesis and in the development of sugar-based anti-melanogenic agents. The proposed mechanisms of action of these agents include: (a) (natural sugars) disturbing proper melanosome maturation by inducing osmotic stress and inhibiting the PI3 kinase pathway and (b) (sugar derivatives) inhibiting tyrosinase maturation by blocking N-glycosylation. Finally, we propose an alternative strategy for developing anti-melanogenic sugars that theoretically reduce melanosomal pH by inhibiting a sucrose transporter and reduce tyrosinase activity by inhibiting copper incorporation into an active site. These studies provide evidence of the utility of sugar-based anti-melanogenic agents in managing skin darkness and curing pigmentary disorders and suggest a future direction for the development of physiologically favorable anti-melanogenic agents. PMID:27092497

  17. Agent-based services for B2B electronic commerce

    NASA Astrophysics Data System (ADS)

    Fong, Elizabeth; Ivezic, Nenad; Rhodes, Tom; Peng, Yun

    2000-12-01

    The potential of agent-based systems has not been realized yet, in part, because of the lack of understanding of how the agent technology supports industrial needs and emerging standards. The area of business-to-business electronic commerce (b2b e-commerce) is one of the most rapidly developing sectors of industry with huge impact on manufacturing practices. In this paper, we investigate the current state of agent technology and the feasibility of applying agent-based computing to b2b e-commerce in the circuit board manufacturing sector. We identify critical tasks and opportunities in the b2b e-commerce area where agent-based services can best be deployed. We describe an implemented agent-based prototype system to facilitate the bidding process for printed circuit board manufacturing and assembly. These activities are taking place within the Internet Commerce for Manufacturing (ICM) project, the NIST- sponsored project working with industry to create an environment where small manufacturers of mechanical and electronic components may participate competitively in virtual enterprises that manufacture printed circuit assemblies.

  18. Object-Based Greenhouse Classification from High Resolution Satellite Imagery: a Case Study Antalya-Turkey

    NASA Astrophysics Data System (ADS)

    Coslu, M.; Sonmez, N. K.; Koc-San, D.

    2016-06-01

    Pixel-based classification is widely used for detecting land use and land cover with remote sensing technology. Recently, object-based classification methods have begun to be used alongside pixel-based classification on high resolution satellite imagery. Previous studies indicate that object-based classification can be more successful than other classification methods. While pixel-based classification is performed according to the grey values of pixels, object-based classification is executed by generating image segmentation and updatable rule sets. In this study, we aimed to detect and map greenhouses from high resolution satellite imagery using object-based classification. The study was carried out in the Antalya province, where greenhouses are intensively concentrated. The study consists of three main stages: segmentation, classification and accuracy assessment. In the first stage, segmentation, the most important part of object-based image analysis, the image was segmented using the basic spectral bands of high resolution Worldview-2 satellite imagery. In the second stage, classification was executed by applying the nearest neighbour classifier to the generated segments, and a result map of the study area was produced. Finally, accuracy assessments were performed using field studies and digital data of the area. According to the results, object-based greenhouse classification using high resolution satellite imagery achieved over 80% accuracy.
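
    A minimal sketch of the object-based workflow: segment the image, compute per-segment spectral statistics, then label segments with a nearest neighbour classifier. SLIC superpixels (scikit-image 0.19+, which uses the channel_axis argument) stand in for the segmentation used in the study, and the image and labels are synthetic.

```python
# Object-based classification: segmentation + per-segment features + kNN.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
image = rng.random((120, 120, 4))                  # 4-band stand-in imagery
segments = slic(image, n_segments=50, channel_axis=-1)

def segment_features(image, segments):
    feats = []
    for sid in np.unique(segments):
        mask = segments == sid
        # Mean and standard deviation of each band within the segment.
        feats.append(np.hstack([image[mask].mean(axis=0),
                                image[mask].std(axis=0)]))
    return np.array(feats)

X = segment_features(image, segments)
y = rng.integers(0, 2, size=len(X))                # greenhouse / other (toy)
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict(X[:5]))
```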

  19. Inorganic nanoparticle-based contrast agents for molecular imaging

    PubMed Central

    Cho, Eun Chul; Glaus, Charles; Chen, Jingyi; Welch, Michael J.; Xia, Younan

    2010-01-01

    Inorganic nanoparticles including semiconductor quantum dots, iron oxide nanoparticles, and gold nanoparticles have been developed as contrast agents for diagnostics by molecular imaging. Compared to traditional contrast agents, nanoparticles offer several advantages: their optical and magnetic properties can be tailored by engineering the composition, structure, size, and shape; their surfaces can be modified with ligands to target specific biomarkers of disease; the contrast enhancement provided can be equivalent to millions of molecular counterparts; and they can be integrated with a combination of different functions for multi-modal imaging. Here, we review recent advances in the development of contrast agents based on inorganic nanoparticles for molecular imaging, with a touch on contrast enhancement, surface modification, tissue targeting, clearance, and toxicity. As research efforts intensify, contrast agents based on inorganic nanoparticles that are highly sensitive, target-specific, and safe to use are expected to enter clinical applications in the near future. PMID:21074494

  20. Tutorial on agent-based modeling and simulation.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2005-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques.

  1. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false-positive rate.

  2. Commercial Shot Classification Based on Multiple Features Combination

    NASA Astrophysics Data System (ADS)

    Liu, Nan; Zhao, Yao; Zhu, Zhenfeng; Ni, Rongrong

    This paper presents a commercial shot classification scheme combining well-designed visual and textual features to automatically detect TV commercials. To identify the inherent difference between commercials and general programs, a special mid-level textual descriptor is proposed, aiming to capture the spatio-temporal properties of the video texts typical of commercials. In addition, we introduce an ensemble-learning based combination method, named Co-AdaBoost, to interactively exploit the intrinsic relations between the visual and textual features employed.

  3. Feature selection gait-based gender classification under different circumstances

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates two variations in addition to the normal gait sequence: clothing (wearing coats) and carrying a bag. The feature vectors in the proposed system are constructed after applying the wavelet transform, and three different feature sets are proposed. The first is the spatio-temporal distance set, which deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets we divided the human body into upper and lower parts based on the golden ratio proportion. We adopted a statistical method for constructing the feature vector from the above sets, and the dimension of the constructed feature vector is reduced using the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
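
    A minimal sketch of the selection-plus-classification stage, using the ANOVA F statistic as a stand-in for the Fisher score and k-Nearest Neighbor as the classifier; the gait feature matrix is synthetic, not data from the study.

```python
# Fisher-style feature selection followed by k-Nearest Neighbor.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 60))        # wavelet-derived gait features (toy)
y = rng.integers(0, 2, size=120)          # 0 = female, 1 = male (toy labels)
X[y == 1, :5] += 1.0                      # make a few features informative

model = make_pipeline(SelectKBest(f_classif, k=10),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.score(X, y))
```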

  4. Classification Based on Hierarchical Linear Models: The Need for Incorporation of Social Contexts in Classification Analysis

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2009-01-01

    Many areas in educational and psychological research involve the use of classification statistical analysis. For example, school districts might be interested in attaining variables that provide optimal prediction of school dropouts. In psychology, a researcher might be interested in the classification of a subject into a particular psychological…

  5. 3-Hydrazinoindolin-2-one derivatives: Chemical classification and investigation of their targets as anticancer agents.

    PubMed

    Ibrahim, Hany S; Abou-Seri, Sahar M; Abdel-Aziz, Hatem A

    2016-10-21

    Isatin is a well-acknowledged pharmacophore in many clinically approved drugs used for the treatment of cancer. 3-Hydrazinoindolin-2-one, a derivative of isatin, represents the pharmacophore of an important class of biologically active pharmaceutical agents by virtue of their diverse biological activities. This review focuses on the anticancer activity of compounds derived from 3-hydrazinoindolin-2-one. They are classified according to their chemical structure into nine different classes. In each class, different compounds are reviewed, showing their anticancer activity and their potential targets. Moreover, crystallographic data or docking studies are highlighted for some compounds, when available, to provide a deeper understanding of their mechanisms of action. PMID:27391135

  6. Agent-based modeling and simulation Part 3 : desktop ABMS.

    SciTech Connect

    Macal, C. M.; North, M. J.; Decision and Information Sciences

    2007-01-01

    Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS 'is a third way of doing science,' in addition to traditional deductive and inductive reasoning (Axelrod 1997b). Computational advances have made possible a growing number of agent-based models across a variety of application domains. Applications range from modeling agent behavior in the stock market, supply chains, and consumer markets, to predicting the spread of epidemics, the threat of bio-warfare, and the factors responsible for the fall of ancient civilizations. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing agent models, and illustrates the development of a simple agent-based model of shopper behavior using spreadsheets.

  7. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

    Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  8. Evaluating Water Demand Using Agent-Based Modeling

    NASA Astrophysics Data System (ADS)

    Lowry, T. S.

    2004-12-01

    The supply and demand of water resources are functions of complex, inter-related systems including hydrology, climate, demographics, economics, and policy. To assess the safety and sustainability of water resources, planners often rely on complex numerical models that relate some or all of these systems using mathematical abstractions. The accuracy of these models relies on how well the abstractions capture the true nature of the systems interactions. Typically, these abstractions are based on analyses of observations and/or experiments that account only for the statistical mean behavior of each system. This limits the approach in two important ways: 1) It cannot capture cross-system disruptive events, such as major drought, significant policy change, or terrorist attack, and 2) it cannot resolve sub-system level responses. To overcome these limitations, we are developing an agent-based water resources model that includes the systems of hydrology, climate, demographics, economics, and policy, to examine water demand during normal and extraordinary conditions. Agent-based modeling (ABM) develops functional relationships between systems by modeling the interaction between individuals (agents), who behave according to a probabilistic set of rules. ABM is a "bottom-up" modeling approach in that it defines macro-system behavior by modeling the micro-behavior of individual agents. While each agent's behavior is often simple and predictable, the aggregate behavior of all agents in each system can be complex, unpredictable, and different than behaviors observed in mean-behavior models. Furthermore, the ABM approach creates a virtual laboratory where the effects of policy changes and/or extraordinary events can be simulated. Our model, which is based on the demographics and hydrology of the Middle Rio Grande Basin in the state of New Mexico, includes agent groups of residential, agricultural, and industrial users. Each agent within each group determines its water usage
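
    A minimal sketch of bottom-up demand aggregation in this style: each agent draws a base demand and adjusts it with a simple price-response rule, and a policy change is simulated by varying price. All coefficients are illustrative, not calibrated to the Middle Rio Grande Basin.

```python
# Agent-based water demand: aggregate micro-behavior of user groups.
import numpy as np

rng = np.random.default_rng(0)

class WaterUser:
    def __init__(self, base_demand, price_sensitivity):
        self.base = base_demand
        self.sens = price_sensitivity

    def demand(self, price):
        # Demand falls off with price; never negative.
        return max(self.base * (1.0 - self.sens * price), 0.0)

agents = (
    [WaterUser(rng.normal(0.5, 0.1), 0.30) for _ in range(500)]    # residential
    + [WaterUser(rng.normal(40.0, 8.0), 0.10) for _ in range(20)]  # agricultural
    + [WaterUser(rng.normal(10.0, 2.0), 0.05) for _ in range(10)]  # industrial
)

for price in (0.0, 0.5, 1.0):     # simulate a policy change via pricing
    total = sum(a.demand(price) for a in agents)
    print(f"price={price:.1f}  total demand={total:.1f}")
```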

  9. Nanochemistry of Protein-Based Delivery Agents.

    PubMed

    Rajendran, Subin R C K; Udenigwe, Chibuike C; Yada, Rickey Y

    2016-01-01

    The past decade has seen an increased interest in the conversion of food proteins into functional biomaterials, including their use for loading and delivery of physiologically active compounds such as nutraceuticals and pharmaceuticals. Proteins possess a competitive advantage over other platforms for the development of nanodelivery systems since they are biocompatible, amphipathic, and widely available. Proteins also have unique molecular structures and diverse functional groups that can be selectively modified to alter encapsulation and release properties. A number of physical and chemical methods have been used for preparing protein nanoformulations, each based on different underlying protein chemistry. This review focuses on the chemistry of the reorganization and/or modification of proteins into functional nanostructures for delivery, from the perspective of their preparation, functionality, stability and physiological behavior. PMID:27489854

  10. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities of visualization and download.

  11. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important but hard-to-solve problem since it appears to be application dependent, with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose pixel-classification-based color image segmentation using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can effectively capture the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input to a twin support vector machine (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) effective QEMs are introduced to describe color image pixel content, considering the correlation between different color channels; (2) an excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature. PMID:26618250

  12. Classification of emerald based on multispectral image and PCA

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Zhao, Dazun; Huang, Qingmei; Ren, Pengyuan; Feng, Jie; Zhang, Xiaoyan

    2005-02-01

    Traditionally, the grade discrimination and classification of boulders (emeralds) are implemented using methods based on people's experience. In our previous work, a method based on the NCS (Natural Color System) color system and sRGB color space conversion was employed for a coarse grade classification of emeralds. However, it is well known that the match of two colors is not a true "match" unless their spectra are the same. Because metameric colors cannot be differentiated by a three-channel (RGB) camera, a multispectral camera (MSC) is used as the image capturing device in this paper. It consists of a trichromatic digital camera and a set of wide-band filters. The spectra are obtained by measuring a series of natural boulder (emerald) samples. The principal component analysis (PCA) method is employed to obtain spectral eigenvectors. During the fine classification, the color difference and the RMS of the spectral difference between the estimated and original spectra are used as criteria. It has been shown that 6 eigenvectors are enough to reconstruct the reflection spectra of the testing samples.
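
    A minimal sketch of the reconstruction test described above: fit principal components to reflectance spectra, reconstruct each spectrum from 6 eigenvectors, and score the fit by RMS error. The smooth synthetic spectra stand in for measured emerald reflectances.

```python
# PCA reconstruction of reflectance spectra from 6 eigenvectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 61)
# Synthetic smooth reflectance spectra standing in for emerald measurements.
spectra = np.array([np.exp(-((wavelengths - rng.uniform(500, 560)) / 60.0)**2)
                    + 0.05 * rng.standard_normal(61) for _ in range(100)])

pca = PCA(n_components=6).fit(spectra)
reconstructed = pca.inverse_transform(pca.transform(spectra))
rms = np.sqrt(np.mean((spectra - reconstructed)**2, axis=1))
print("mean RMS reconstruction error:", rms.mean())
```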

  13. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    Following the fact that a large collection of ink and wash paintings (IWP) is being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate the local region and detect painting strokes to enable histogram-based feature extraction and capture of important cues to reflect the styles of different artists. Such features are then applied to drive a number of neural networks in parallel to complete the classification, and an information entropy balanced fusion is proposed to make an integrated decision for the multiple neural network classification results in which the entropy is used as a pointer to combine the global and local features. Evaluations via experiments support that the proposed algorithm achieves good performances, providing excellent potential for computerized analysis and management of IWPs.

  14. Sparse graph-based transduction for image classification

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Yang, Dan; Zhou, Jia; Huangfu, Lunwen; Zhang, Xiaohong

    2015-03-01

    Motivated by the remarkable successes of graph-based transduction (GT) and sparse representation (SR), we present a classifier named sparse graph-based classifier (SGC) for image classification. In SGC, SR is leveraged to measure the correlation (similarity) of every two samples and a graph is constructed for encoding these correlations. Then the Laplacian eigenmapping is adopted for deriving the graph Laplacian of the graph. Finally, SGC can be obtained by plugging the graph Laplacian into the conventional GT framework. In the image classification procedure, SGC utilizes the correlations which are encoded in the learned graph Laplacian, to infer the labels of unlabeled images. SGC inherits the merits of both GT and SR. Compared to SR, SGC improves the robustness and the discriminating power of GT. Compared to GT, SGC sufficiently exploits the whole data. Therefore, it alleviates the undercomplete dictionary issue suffered by SR. Four popular image databases are employed for evaluation. The results demonstrate that SGC can achieve a promising performance in comparison with the state-of-the-art classifiers, particularly in the small training sample size case and the noisy sample case.
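
    A minimal sketch of the SGC idea: edge weights come from sparse coding of each sample over the others (Lasso as the sparse solver), and labels are then propagated over the resulting graph; the damping factor and clamping scheme are generic label-propagation choices, not necessarily the authors' exact formulation.

```python
# Sparse-representation graph construction + transductive label propagation.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import Lasso

X, y = make_blobs(n_samples=60, n_features=20, centers=2, random_state=0)
n = len(X)

# Sparse-representation graph: code sample i over all the other samples.
W = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    coef = Lasso(alpha=0.1, max_iter=5000).fit(X[others].T, X[i]).coef_
    W[i, others] = np.abs(coef)
W = (W + W.T) / 2.0                                  # symmetrize

# Graph transduction: propagate a few known labels over the graph.
labels = np.full(n, -1)
labels[:5], labels[-5:] = y[:5], y[-5:]
F = np.zeros((n, 2))
F[labels >= 0, labels[labels >= 0]] = 1.0
D_inv = np.diag(1.0 / np.maximum(W.sum(axis=1), 1e-12))
for _ in range(50):
    F = 0.9 * D_inv @ W @ F                          # damped propagation
    F[labels >= 0] = 0.0
    F[labels >= 0, labels[labels >= 0]] = 1.0        # clamp known labels
print((F.argmax(axis=1) == y).mean())
```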

  17. ECG-based heartbeat classification for arrhythmia detection: A survey.

    PubMed

    Luz, Eduardo José da S; Schwartz, William Robson; Cámara-Chávez, Guillermo; Menotti, David

    2016-04-01

    An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In recent decades, many methods have been developed for automatic ECG-based heartbeat classification. In this work, we survey current state-of-the-art methods for automated, ECG-based heartbeat classification for abnormality detection, covering ECG signal preprocessing, heartbeat segmentation techniques, feature description methods, and the learning algorithms used. In addition, we describe some of the databases recommended for method evaluation by the well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature, present concluding remarks and future challenges, and propose an evaluation process workflow to guide authors in future work. PMID:26775139

  18. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  19. No-reference image quality metric based on image classification

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Lee, Chulhee

    2011-12-01

    In this article, we present a new no-reference (NR) objective image quality metric based on image classification. We also propose a new blocking metric and a new blur metric. Both metrics are NR metrics since they need no information from the original image. The blocking metric was computed by considering that the visibility of horizontal and vertical blocking artifacts can change depending on background luminance levels. When computing the blur metric, we took into account the fact that blurring in edge regions is generally more sensitive to the human visual system. Since different compression standards usually produce different compression artifacts, we classified images into two classes using the proposed blocking metric: one class that contained blocking artifacts and another class that did not contain blocking artifacts. Then, we used different quality metrics based on the classification results. Experimental results show that each metric correlated well with subjective ratings, and the proposed NR image quality metric consistently provided good performance with various types of content and distortions.

  20. Peatland classification of West Siberia based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.

    2014-12-01

    Increasing interest in peatlands for the prediction of environmental change requires an understanding of their geographical distribution. The West Siberian Plain is the largest peatland area in Eurasia and lies at high latitudes that are experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented toward environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and band 5 was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB, and then classified in Multispec (Purdue Research Foundation) using a supervised maximum likelihood algorithm. Training sample selection was mostly based on spectral signatures owing to limited ancillary and high-resolution image data. As an additional source of information, we applied our field knowledge resulting from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. Accuracy assessment, based on 439 randomly assigned test sites, showed a final map accuracy of 80%. Total peatland area was estimated at 73.0 Mha. Various ridge
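    The masking-then-classification workflow can be illustrated compactly: an index threshold separates candidate peatland pixels, and a Gaussian maximum-likelihood rule assigns the remaining pixels to classes. Thresholds, band counts, and class statistics below are placeholders, not values from the study.

```python
# Threshold masking plus Gaussian maximum-likelihood classification (sketch).
import numpy as np

def grvi(green, red):
    """Green-Red Vegetation Index used for the peatland mask."""
    return (green - red) / (green + red + 1e-9)

def max_likelihood_classify(pixels, class_stats):
    """class_stats: list of (mean_vector, covariance_matrix) per class."""
    scores = []
    for mean, cov in class_stats:
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        ll = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)  # Gaussian
        ll -= 0.5 * np.log(np.linalg.det(cov))                 # log-likelihood
        scores.append(ll)
    return np.argmax(np.stack(scores), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    green, red = rng.random(1000), rng.random(1000)
    mask = grvi(green, red) > 0.05                 # placeholder threshold
    bands = rng.random((int(mask.sum()), 4))       # 4-band pixels inside mask
    stats = [(np.full(4, 0.4), np.eye(4) * 0.05),
             (np.full(4, 0.6), np.eye(4) * 0.05)]
    labels = max_likelihood_classify(bands, stats)
    print("masked pixels:", int(mask.sum()), "class counts:", np.bincount(labels))
```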

  1. Intelligent Agent-Based Intrusion Detection System Using Enhanced Multiclass SVM

    PubMed Central

    Ganapathy, S.; Yogesh, P.; Kannan, A.

    2012-01-01

    Intrusion detection systems have long been used with various techniques to detect intrusions in networks effectively. However, most of these systems can detect intruders only at the cost of a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely, an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results of the proposed model show that this system detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set. PMID:23056036

  2. Agents and Data Mining in Bioinformatics: Joining Data Gathering and Automatic Annotation with Classification and Distributed Clustering

    NASA Astrophysics Data System (ADS)

    Bazzan, Ana L. C.

    Multiagent systems and data mining techniques are being frequently used in genome projects, especially regarding the annotation process (annotation pipeline). This paper discusses annotation-related problems where agent-based and/or distributed data mining has been successfully employed.

  3. Agent-based simulation of a financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
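    A toy version of such a market fits in a few lines: agents with finite cash and shares make random, resource-constrained buy or sell decisions each period, matched trades conserve total cash, and the price moves with excess demand. The price-impact rule below is a simplification of the paper's order-matching mechanism; all parameters are illustrative.

```python
# Minimal agent-based market with conserved cash (illustrative sketch).
import random

class Trader:
    def __init__(self, cash, shares):
        self.cash, self.shares = cash, shares

def simulate(n_agents=100, steps=500, price=100.0, seed=42):
    rng = random.Random(seed)
    agents = [Trader(1000.0, 10) for _ in range(n_agents)]
    prices = [price]
    for _ in range(steps):
        rng.shuffle(agents)
        half = n_agents // 2
        # Random buy/sell decisions constrained by available resources
        buyers = [a for a in agents[:half] if a.cash >= price]
        sellers = [a for a in agents[half:] if a.shares > 0]
        # Toy price impact: the price moves with normalized excess demand
        price *= 1 + 0.01 * (len(buyers) - len(sellers)) / n_agents
        # Matched trades conserve total cash (no money-creation process)
        for b, s in zip(buyers, sellers):
            b.cash -= price; b.shares += 1
            s.cash += price; s.shares -= 1
        prices.append(price)
    return prices

if __name__ == "__main__":
    print("final price:", round(simulate()[-1], 2))
```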

  4. Utilizing ECG-Based Heartbeat Classification for Hypertrophic Cardiomyopathy Identification.

    PubMed

    Rahman, Quazi Abidur; Tereshchenko, Larisa G; Kongkatong, Matthew; Abraham, Theodore; Abraham, M Roselle; Shatkay, Hagit

    2015-07-01

    Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is (potentially fatally) obstructed. A test based on electrocardiograms (ECG) that record the heart electrical activity can help in early detection of HCM patients. This paper presents a cardiovascular-patient classifier we developed to identify HCM patients using standard 10-second, 12-lead ECG signals. Patients are classified as having HCM if the majority of their recorded heartbeats are recognized as characteristic of HCM. Thus, the classifier's underlying task is to recognize individual heartbeats segmented from 12-lead ECG signals as HCM beats, where heartbeats from non-HCM cardiovascular patients are used as controls. We extracted 504 morphological and temporal features—both commonly used and newly-developed ones—from ECG signals for heartbeat classification. To assess classification performance, we trained and tested a random forest classifier and a support vector machine classifier using 5-fold cross validation. We also compared the performance of these two classifiers to that obtained by a logistic regression classifier, and the first two methods performed better than logistic regression. The patient-classification precision of random forests and of support vector machine classifiers is close to 0.85. Recall (sensitivity) and specificity are approximately 0.90. We also conducted feature selection experiments by gradually removing the least informative features; the results show that a relatively small subset of 264 highly informative features can achieve performance measures comparable to those achieved by using the complete set of features. PMID:25915962
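    The patient-level rule described above (HCM if most beats are recognized as HCM) is easy to make concrete. In this sketch the beat-level classifier is a stand-in random forest trained on synthetic 504-dimensional feature vectors; only the majority-vote aggregation reflects the paper's stated procedure.

```python
# Beat-level classification aggregated to a patient label (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_patient(beat_features, beat_model):
    """Label a patient 1 (HCM) if most beats are predicted as HCM."""
    return int(beat_model.predict(beat_features).mean() > 0.5)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 504))          # synthetic training beats
    y = rng.integers(0, 2, 200)              # synthetic HCM / control labels
    X[y == 1] += 0.5                         # separate the classes slightly
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    patient_beats = rng.normal(size=(30, 504)) + 0.5   # one patient's beats
    print("patient label:", classify_patient(patient_beats, model))
```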

  5. Kernel-based machine learning techniques for infrasound signal classification

    NASA Astrophysics Data System (ADS)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
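    The supervised setting described here, an SVM over PMCC-derived features with tuned hyperparameters, can be sketched with a standard cross-validated grid search. The six synthetic features below merely stand in for PMCC descriptors such as trace velocity and azimuth.

```python
# RBF-kernel SVM with cross-validated hyperparameter search (sketch).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 6))                    # PMCC-like feature vectors
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # signal-vs-noise labels
    pipe = make_pipeline(StandardScaler(), SVC())
    grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1.0]}
    search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
    print("best params:", search.best_params_,
          "cv accuracy:", round(search.best_score_, 3))
```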

  6. Macromolecular and Dendrimer Based Magnetic Resonance Contrast Agents

    PubMed Central

    Bumb, Ambika; Brechbiel, Martin W.; Choyke, Peter

    2010-01-01

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20–25 years, a number of gadolinium based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution and targeting of dendrimer-based MR contrast agents are also discussed. PMID:20590365

  7. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.

  8. Bionanoconjugate-based composites for decontamination of nerve agents.

    PubMed

    Borkar, Indrakant V; Dinu, Cerasela Zoica; Zhu, Guangyu; Kane, Ravi S; Dordick, Jonathan S

    2010-01-01

    We have developed enzyme-based composites that rapidly and effectively detoxify simulants of V- and G-type chemical warfare nerve agents. The approach was based on the efficient immobilization of organophosphorus hydrolase onto carbon nanotubes to form active and stable conjugates that were easily entrapped in commercially available paints. The resulting catalytic composites showed no enzyme leaching and rendered >99% decontamination of 10 g/m(2) paraoxon, a simulant of the V-type nerve agent, in 30 minutes and >95% decontamination of diisopropylfluorophosphate, a simulant of the G-type nerve agent, in 45 minutes. The formulations are expected to be environmentally friendly and to offer an easy-to-use, on-demand decontamination alternative to chemical approaches for sustainable material self-decontamination. PMID:20859933

  9. A procedure for blending manual and correlation-based synoptic classifications

    NASA Astrophysics Data System (ADS)

    Frakes, Brent; Yarnal, Brent

    1997-11-01

    Manual and correlation-based (also known as Lund or Kirchhofer) classifications are important to synoptic climatology, but both have significant drawbacks. Manual classifications are inherently subjective and labour intensive, whereas correlation-based classifications give the investigator little control over the map-patterns generated by the computer. This paper develops a simple procedure that combines these two classification methods, thereby minimizing these weaknesses. The hybrid procedure utilizes a relatively short-term manual classification to generate composite pressure surfaces, which are then used as seeds in a long-term correlation-based computer classification. Overall, the results show that the hybrid classification reproduces the manual classification while optimizing speed, objectivity and investigator control, thus suggesting that the hybrid procedure is superior to the manual or correlation classifications as they are currently used. More specifically, the results demonstrate little difference between the hybrid procedure and the original manual classification at monthly and longer time-scales, with less internal variation in the hybrid types than in the subjective categories. However, the two classifications showed substantial differences at the daily level, not because of poor performance by the hybrid procedure, but because of errors introduced by the subjectivity of the manual classification.
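    The hybrid idea reduces to a compact rule: composites from the manual classification act as seed patterns, and each daily pressure grid is assigned to the seed with which it correlates most strongly, or left unclassified below a threshold. Grids, seeds, and the threshold here are synthetic placeholders.

```python
# Seeded correlation-based (Lund/Kirchhofer-style) classification sketch.
import numpy as np

def correlate(a, b):
    """Pearson correlation between two flattened pressure grids."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def classify_days(daily_grids, seed_patterns, threshold=0.5):
    labels = []
    for grid in daily_grids:
        scores = [correlate(grid, seed) for seed in seed_patterns]
        best = int(np.argmax(scores))
        # Days matching no seed above the threshold stay unclassified (-1)
        labels.append(best if scores[best] >= threshold else -1)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    seeds = [rng.normal(size=(10, 10)) for _ in range(4)]   # manual composites
    days = [seeds[i % 4] + rng.normal(0, 0.5, (10, 10)) for i in range(12)]
    print(classify_days(days, seeds))
```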

  10. Model-based classification of visual information for content-based retrieval

    NASA Astrophysics Data System (ADS)

    Jaimes, Alejandro; Chang, Shih-Fu

    1998-12-01

    Most existing approaches to content-based retrieval rely on query by example, or on user sketches based on low-level features. However, these are not suitable for semantic (object level) distinctions. In other approaches, information is classified according to a predefined set of classes and classification is either performed manually or by using class-specific algorithms. Most of these systems lack flexibility: the user does not have the ability to define or change the classes, and new classification schemes require implementation of new class-specific algorithms and/or the input of an expert. In this paper, we present a different approach to content-based retrieval and a novel framework for classification of visual information, in which (1) users define their own visual classes and classifiers are learned automatically, and (2) multiple fuzzy classifiers and machine learning techniques are combined for automatic classification at multiple levels (region, perceptual, object-part, object and scene). We present The Visual Apprentice, an implementation of our framework for still images and video that uses a combination of lazy learning, decision trees, and evolution programs for classification and grouping. Our system is flexible in that models can be changed by users over time, different types of classifiers are combined, and user-model definitions can be applied to object and scene structure classification. Special emphasis is placed on the difference between semantic and visual classes, and between classification and detection. Examples and results are presented to demonstrate the applicability of our approach to visual classification and detection.

  11. Knowledge-based classification of neuronal fibers in entire brain.

    PubMed

    Xia, Yan; Turken, U; Whitfield-Gabrieli, Susan L; Gabrieli, John D

    2005-01-01

    This work presents a framework driven by parcellation of brain gray matter in standard normalized space to classify the neuronal fibers obtained from diffusion tensor imaging (DTI) in entire human brain. Classification of fiber bundles into groups is an important step for the interpretation of DTI data in terms of functional correlates of white matter structures. Connections between anatomically delineated brain regions that are considered to form functional units, such as a short-term memory network, are identified by first clustering fibers based on their terminations in anatomically defined zones of gray matter according to Talairach Atlas, and then refining these groups based on geometric similarity criteria. Fiber groups identified this way can then be interpreted in terms of their functional properties using knowledge of functional neuroanatomy of individual brain regions specified in standard anatomical space, as provided by functional neuroimaging and brain lesion studies. PMID:16685847

  12. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  13. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Among the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  14. Lung sound classification using cepstral-based statistical features.

    PubMed

    Sengupta, Nandini; Sahidullah, Md; Saha, Goutam

    2016-08-01

    Lung sounds convey useful information related to pulmonary pathology. In this paper, short-term spectral characteristics of lung sounds are studied to characterize the lung sounds for the identification of associated diseases. Motivated by the success of cepstral features in speech signal classification, we evaluate five different cepstral features to recognize three types of lung sounds: normal, wheeze and crackle. Subsequently for fast and efficient classification, we propose a new feature set computed from the statistical properties of cepstral coefficients. Experiments are conducted on a dataset of 30 subjects using the artificial neural network (ANN) as a classifier. Results show that the statistical features extracted from mel-frequency cepstral coefficients (MFCCs) of lung sounds outperform commonly used wavelet-based features as well as standard cepstral coefficients including MFCCs. Further, we experimentally optimize different control parameters of the proposed feature extraction algorithm. Finally, we evaluate the features for noisy lung sound recognition. We have found that our newly investigated features are more robust than existing features and show better recognition accuracy even in low signal-to-noise ratios (SNRs). PMID:27286184
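    The proposed feature set reduces the variable-length MFCC matrix to a fixed-length vector of per-coefficient statistics. A sketch assuming librosa is installed follows; the synthetic tone-plus-noise signal stands in for a real lung-sound recording, and mean and standard deviation are used as representative statistics.

```python
# Statistical summary of MFCCs as a fixed-length feature vector (sketch).
import numpy as np
import librosa

def cepstral_statistical_features(signal, sr, n_mfcc=13):
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # One mean and one standard deviation per cepstral coefficient
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

if __name__ == "__main__":
    sr = 8000
    t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
    # Synthetic stand-in signal: low-frequency tone plus noise
    signal = (0.5 * np.sin(2 * np.pi * 150 * t)
              + 0.1 * np.random.randn(t.size)).astype(np.float32)
    feats = cepstral_statistical_features(signal, sr)
    print("feature vector length:", feats.shape[0])   # 26 for n_mfcc=13
```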

  15. Classification of knee arthropathy with accelerometer-based vibroarthrography.

    PubMed

    Moreira, Dinis; Silva, Joana; Correia, Miguel V; Massada, Marta

    2016-01-01

    One of the most common knee joint disorders is osteoarthritis, which results from the progressive degeneration of cartilage and subchondral bone over time and affects mainly elderly adults. Current evaluation techniques are complex, expensive, or invasive, or simply fail to detect the small, progressive changes that occur within the knee. Vibroarthrography has appeared as a new solution in which the mechanical vibratory signals arising from the knee are recorded with only an accelerometer and subsequently analyzed, enabling differentiation between a healthy and an arthritic joint. In this study, a vibration-based classification system was created using a dataset with 92 healthy and 120 arthritic segments of knee joint signals collected from 19 healthy and 20 arthritic volunteers, evaluated with k-nearest neighbors and support vector machine classifiers. The best classification was obtained using the k-nearest neighbors classifier with only 6 time-frequency features, with an overall accuracy of 89.8% and a precision, recall and f-measure of 88.3%, 92.4% and 90.1%, respectively. Preliminary results showed that vibroarthrography can be a promising, non-invasive and low-cost tool that could be used for screening purposes. Despite these encouraging results, several upgrades to the data collection process and analysis can be further implemented. PMID:27225550

  16. [Vegetation change in Shenzhen City based on NDVI change classification].

    PubMed

    Li, Yi-Jing; Zeng, Hui; Wel, Jian-Bing

    2008-05-01

    Based on TM images from 1988 and 2003 and land-use change survey data from 2004, vegetation change in Shenzhen City was assessed with an NDVI (normalized difference vegetation index) change classification method, and the impacts of natural and social constraining factors were analyzed. The results showed that, as a whole, the rapid urbanization of 1988-2003 had little impact on the vegetation cover of the City, but in plain areas of low altitude the vegetation cover degraded more obviously. The main causes of the localized ecological degradation were the encroachment of built-up areas on woods and orchards, land conversion from woods to orchards at altitudes above 100 m, and a low percentage of green land in some built-up areas. In the future, vegetation protection and construction in Shenzhen should focus on strengthening the protection and restoration of remnant woods, avoiding built-up expansion into well-vegetated woods and orchards, rectifying unreasonable orchard construction at altitudes above 100 m, and consolidating greenbelt construction inside built-up areas. The NDVI change classification method worked well in efficiently uncovering the trend of macroscale vegetation change while avoiding the effect of random noise in the data. PMID:18655594
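    The core of an NDVI change classification is small enough to show directly: compute NDVI for both dates and bin each pixel by the sign and magnitude of the difference. The change threshold below is a placeholder, not the study's value.

```python
# Per-pixel NDVI change classification between two dates (sketch).
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def classify_change(ndvi_before, ndvi_after, threshold=0.1):
    diff = ndvi_after - ndvi_before
    classes = np.full(diff.shape, "stable", dtype=object)
    classes[diff < -threshold] = "degraded"
    classes[diff > threshold] = "improved"
    return classes

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    nir88, red88 = rng.random((100, 100)), rng.random((100, 100))
    nir03, red03 = rng.random((100, 100)), rng.random((100, 100))
    change = classify_change(ndvi(nir88, red88), ndvi(nir03, red03))
    values, counts = np.unique(change, return_counts=True)
    print(dict(zip(values, counts)))
```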

  17. Texture-Based Automated Lithological Classification Using Aeromagnetic Anomaly Images

    USGS Publications Warehouse

    Shankar, Vivek

    2009-01-01

    This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004. Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.

  18. Computational hepatocellular carcinoma tumor grading based on cell nuclei classification

    PubMed Central

    Atupelage, Chamidu; Nagahashi, Hiroshi; Kimura, Fumikazu; Yamaguchi, Masahiro; Tokiya, Abe; Hashiguchi, Akinori; Sakamoto, Michiie

    2014-01-01

    Hepatocellular carcinoma (HCC) is the most common histological type of primary liver cancer. HCC is graded according to the malignancy of the tissues. It is important to diagnose low-grade HCC tumors because these tissues have a good prognosis. Image interpretation-based computer-aided diagnosis (CAD) systems have been developed to automate the HCC grading process. Generally, the HCC grade is determined by the characteristics of liver cell nuclei. Therefore, it is preferable that CAD systems utilize only liver cell nuclei for HCC grading. This paper proposes an automated HCC diagnosis method. In particular, it defines a pipeline path that excludes non-liver cell nuclei in two consecutive pipeline modules and utilizes the liver cell nuclear features for HCC grading. The significance of excluding the non-liver cell nuclei for HCC grading is experimentally evaluated. Four categories of liver cell nuclear features were utilized for classifying the HCC tumors. Results indicated that nuclear texture is the dominant feature for HCC grading and the others contribute to increased classification accuracy. The proposed method was employed to classify a set of regions of interest selected from HCC whole slide images into five classes and resulted in a 95.97% correct classification rate. PMID:26158066

  19. Performance modeling of feature-based classification in SAR imagery

    NASA Astrophysics Data System (ADS)

    Boshra, Michael; Bhanu, Bir

    1998-09-01

    We present a novel method for modeling the performance of a vote-based approach for target classification in SAR imagery. In this approach, the geometric locations of the scattering centers are used to represent 2D model views of a 3D target for a specific sensor under a given viewing condition (azimuth, depression and squint angles). Performance of such an approach is modeled in the presence of data uncertainty, occlusion, and clutter. The proposed method captures the structural similarity between model views, which plays an important role in determining the classification performance. In particular, performance would improve if the model views are dissimilar and vice versa. The method consists of the following steps. In the first step, given a bound on data uncertainty, model similarity is determined by finding feature correspondence in the space of relative translations between each pair of model views. In the second step, statistical analysis is carried out in the vote, occlusion and clutter space, in order to determine the probability of misclassifying each model view. In the third step, the misclassification probability is averaged for all model views to estimate the probability-of-correct-identification (PCI) plot as a function of occlusion and clutter rates. Validity of the method is demonstrated by comparing predicted PCI plots with ones that are obtained experimentally. Results are presented using both XPATCH and MSTAR SAR data.

  20. Multispectral image analysis of forest (grassland) fire based on agent

    NASA Astrophysics Data System (ADS)

    Guan, Jiaying; Li, Deren; Guan, Zequn

    2001-09-01

    Research on agents can now help operators with routine assignments, economizing scarce resources and improving the real-time image analysis capability of computers. This paper first gives a brief introduction to the agent concept. We then discuss multispectral images of a given area from an agent-based perspective. The main object of this paper is the inspection of forest (grassland) fires, and its purpose is to propose three stages with which agents could monitor wild areas and make decisions automatically, without operator intervention. In the first stage, if pixel values exceed a given threshold, the agent alarms the operators and notifies them that something has happened; in the second stage, the agent analyzes data and self-learns; in the third stage, the agent makes decisions according to its database and knowledge base. As these decisions are influenced by many factors, several models are needed, such as heat source, weather, fire and vegetation models.
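    The first stage lends itself to a small sketch: an agent scans each incoming frame and raises an alarm when any pixel exceeds a threshold, with no operator intervention. The threshold and frames are synthetic placeholders.

```python
# First-stage threshold-alarm monitoring agent (illustrative sketch).
import numpy as np

class FireWatchAgent:
    def __init__(self, threshold):
        self.threshold = threshold

    def inspect(self, frame):
        """Return hot-pixel coordinates; alarm if any exceed the threshold."""
        hot = np.argwhere(frame > self.threshold)
        if hot.size:
            print(f"ALARM: {len(hot)} pixels above {self.threshold}")
        return hot

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    agent = FireWatchAgent(threshold=0.98)
    for _ in range(5):                    # simulated stream of frames
        agent.inspect(rng.random((64, 64)))
```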

  1. Adding ecosystem function to agent-based land use models

    PubMed Central

    Yadav, V.; Del Grosso, S.J.; Parton, W.J.; Malanson, G.P.

    2015-01-01

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeochemical models are needed in order to calculate such fluxes. The Century model is described with particular attention to the land use choices that it can encompass. When Century is applied to a land use problem the combinatorial choices lead to a potentially unmanageable number of simulation runs. Century is also parameter-intensive. Three ways of including Century output in agent-based models, ranging from separately calculated look-up tables to agents running Century within the simulation, are presented. The latter may be most efficient, but it moves the computing costs to where they are most problematic. Concern for computing costs should not be a roadblock. PMID:26191077

  2. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  3. Classification of genes based on gene expression analysis

    NASA Astrophysics Data System (ADS)

    Angelova, M.; Myers, C.; Faith, J.

    2008-05-01

    Systems biology and bioinformatics are now major fields for productive research. DNA microarrays and other array technologies and genome sequencing have advanced to the point that it is now possible to monitor gene expression on a genomic scale. Gene expression analysis is discussed and some important clustering techniques are considered. The patterns identified in the data suggest similarities in the gene behavior, which provides useful information for the gene functionalities. We discuss measures for investigating the homogeneity of gene expression data in order to optimize the clustering process. We contribute to the knowledge of functional roles and regulation of E. coli genes by proposing a classification of these genes based on consistently correlated genes in expression data and similarities of gene expression patterns. A new visualization tool for targeted projection pursuit and dimensionality reduction of gene expression data is demonstrated.

  4. Classification of genes based on gene expression analysis

    SciTech Connect

    Angelova, M. Myers, C. Faith, J.

    2008-05-15

    Systems biology and bioinformatics are now major fields for productive research. DNA microarrays and other array technologies and genome sequencing have advanced to the point that it is now possible to monitor gene expression on a genomic scale. Gene expression analysis is discussed and some important clustering techniques are considered. The patterns identified in the data suggest similarities in the gene behavior, which provides useful information for the gene functionalities. We discuss measures for investigating the homogeneity of gene expression data in order to optimize the clustering process. We contribute to the knowledge of functional roles and regulation of E. coli genes by proposing a classification of these genes based on consistently correlated genes in expression data and similarities of gene expression patterns. A new visualization tool for targeted projection pursuit and dimensionality reduction of gene expression data is demonstrated.

  5. Classification and thermal history of petroleum based on light hydrocarbons

    NASA Astrophysics Data System (ADS)

    Thompson, K. F. M.

    1983-02-01

    Classifications of oils and kerogens are described. Two indices, termed the Heptane and Isoheptane Values, are employed, based on analyses of gasoline-range hydrocarbons. The indices assess the degree of paraffinicity and allow the definition of four types of oil: normal, mature, supermature, and biodegraded. The values of these indices measured in sediment extracts are a function of maximum attained temperature and of kerogen type. Aliphatic and aromatic kerogens are definable. Only the extracts of sediments bearing aliphatic kerogens having a specific thermal history are identical to the normal oils, which form the largest group (41%) in the sample set. This group was evidently generated at subsurface temperatures of the order of 138°-149°C (280°-300°F), defined under specific conditions of burial history. It is suggested that all other petroleums are transformation products of normal oils.

  6. An Agent-based Framework for Web Query Answering.

    ERIC Educational Resources Information Center

    Wang, Huaiqing; Liao, Stephen; Liao, Lejian

    2000-01-01

    Discusses discrepancies between user queries on the Web and the answers provided by information sources; proposes an agent-based framework for Web mining tasks; introduces an object-oriented deductive data model and a flexible query language; and presents a cooperative mechanism for query answering. (Author/LRW)

  7. Adding ecosystem function to agent-based land use models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this paper is to examine issues in the inclusion of simulations of ecosystem functions in agent-based models of land use decision-making. The reasons for incorporating these simulations include local interests in land fertility and global interests in carbon sequestration. Biogeoche...

  8. EVA: Collaborative Distributed Learning Environment Based in Agents.

    ERIC Educational Resources Information Center

    Sheremetov, Leonid; Tellez, Rolando Quintero

    In this paper, a Web-based learning environment developed within the project called Virtual Learning Spaces (EVA, in Spanish) is presented. The environment is composed of knowledge, collaboration, consulting, experimentation, and personal spaces as a collection of agents and conventional software components working over the knowledge domains. All…

  9. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
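    A minimal sketch of the rebellion variant, following the commonly cited form of the model: a citizen's grievance is hardship times (1 - legitimacy), and the citizen turns openly rebellious when grievance minus perceived arrest risk exceeds a small threshold. The cop-density term simplifies the agent's local vision neighborhood, and all parameter values are illustrative.

```python
# Grievance-versus-risk activation rule for civil violence (sketch).
import math
import random

def count_active(citizens, legitimacy, cop_density, k=2.3, threshold=0.1):
    """Count citizens whose net grievance pushes them into rebellion."""
    active = 0
    for hardship, risk_aversion in citizens:
        grievance = hardship * (1 - legitimacy)
        # Estimated arrest probability rises with the visible cop density
        p_arrest = 1 - math.exp(-k * cop_density)
        if grievance - risk_aversion * p_arrest > threshold:
            active += 1
    return active

if __name__ == "__main__":
    rng = random.Random(8)
    citizens = [(rng.random(), rng.random()) for _ in range(1000)]
    for legit in (0.9, 0.7, 0.5, 0.3):
        n = count_active(citizens, legit, cop_density=0.04)
        print(f"legitimacy={legit}: {n} of 1000 citizens active")
```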

  10. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. To utilize them effectively and fully, research on automated processing methods for celestial data is therefore imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra using the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the method utilized, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in processing massive spectral data. In conclusion, the results of this work are helpful for the study of galaxy and quasar spectral classification. PMID:22097877

  11. An Agent Based Model for Social Class Emergence

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxiang; Rodriguez Segura, Daniel; Lin, Fei; Mazilu, Irina

    We present an open-system agent-based model to analyze the effects of education and society-specific wealth transactions on the emergence of social classes. Building on previous studies, we use realistic functions to model how years of education affect the income level. Numerical simulations show that the fraction of an individual's total transactions that is invested rather than consumed can cause wealth gaps between different income brackets in the long run. In an attempt to incorporate network effects, we also explore how making the probability of interaction between agents depend on the gap between their income brackets affects the wealth distribution.
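    A toy version of the transaction rule can make the investment effect concrete: in each exchange, a fraction of the transferred amount is invested (earning a return) rather than consumed. The exchange rule, return rate, and the omission of the education-income function are simplifying assumptions of this sketch; how quickly gaps emerge depends on these choices.

```python
# Random pairwise exchanges with an invested fraction (toy sketch).
import random

def top_decile_share(n=1000, steps=200, invest_frac=0.3, ret=0.05, seed=9):
    rng = random.Random(seed)
    wealth = [100.0] * n
    for _ in range(steps):
        for i in range(n):
            j = rng.randrange(n)
            amount = 0.1 * wealth[i]      # agent i pays a random partner j
            wealth[i] -= amount
            # The invested fraction earns a return; the rest is consumed
            gain = amount * (1 - invest_frac) + amount * invest_frac * (1 + ret)
            wealth[j] += gain
    wealth.sort()
    return sum(wealth[-n // 10:]) / sum(wealth)

if __name__ == "__main__":
    for f in (0.0, 0.3, 0.6):
        share = top_decile_share(invest_frac=f)
        print(f"invest fraction {f}: top-10% wealth share = {share:.3f}")
```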

  12. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  13. Structural modifications of quinoline-based antimalarial agents: Recent developments

    PubMed Central

    Bawa, Sandhya; Kumar, Suresh; Drabu, Sushma; Kumar, Rajiv

    2010-01-01

    Antimalarial drugs constitute a major part of antiprotozoal drugs and have been in use for a long time. Antimalarial agents generally belong to the quinoline class, which acts by interfering with heme metabolism. The recent increase in the development of chloroquine-resistant strains of Plasmodium falciparum and the failure of vaccination programs against malaria have fuelled the drug discovery effort against this old and widespread disease. Quinoline and its related derivatives comprise a class of heterocycles that has been exploited more than any other nucleus for the development of potent antimalarial agents. Various chemical modifications of quinoline have been attempted to achieve analogs with potent antimalarial properties against sensitive as well as resistant strains of Plasmodium sp., together with minimal potential for undesirable side effects. This review outlines some of the recent chemical modifications undertaken for the development of potent antimalarial agents based on quinoline. PMID:21814435

  14. Techniques and Issues in Agent-Based Modeling Validation

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Validation of simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. Agent-based models (ABMs) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. However, researchers have only recently started to consider the issues of validation. Compared to other simulation models, ABMs differ in many respects in model development, usage and validation. An ABM is inherently easier to build than a classical simulation, but more difficult to describe formally since it is closer to human cognition. Using multi-agent models to study complex systems has attracted criticism because of the challenges involved in their validation [1]. In this report, we describe the challenge of ABM validation and present a novel approach we recently developed for an ABM system.

  15. Agent-based reasoning for distributed multi-INT analysis

    NASA Astrophysics Data System (ADS)

    Inchiosa, Mario E.; Parker, Miles T.; Perline, Richard

    2006-05-01

    Fully exploiting the intelligence community's exponentially growing data resources will require computational approaches differing radically from those currently available. Intelligence data is massive, distributed, and heterogeneous. Conventional approaches requiring highly structured and centralized data will not meet this challenge. We report on a new approach, Agent-Based Reasoning (ABR). In NIST evaluations, the use of ABR software tripled analysts' solution speed, doubled accuracy, and halved perceived difficulty. ABR makes use of populations of fine-grained, locally interacting agents that collectively reason about intelligence scenarios in a self-organizing, "bottom-up" process akin to those found in biological and other complex systems. Reproduction rules allow agents to make inferences from multi-INT data, while movement rules organize information and optimize reasoning. Complementary deterministic and stochastic agent behaviors enhance reasoning power and flexibility. Agent interaction via small-world networks - such as are found in nervous systems, social networks, and power distribution grids - dramatically increases the rate of discovering intelligence fragments that usefully connect to yield new inferences. Small-world networks also support the distributed processing necessary to address intelligence community data challenges. In addition, we have found that ABR pre-processing can boost the performance of commercial text clustering software. Finally, we have demonstrated interoperability with Knowledge Engineering systems and seen that reasoning across diverse data sources can be a rich source of inferences.

  16. Efficient Agent-Based Models for Non-Genomic Evolution

    NASA Technical Reports Server (NTRS)

    Gupta, Nachi; Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Modeling dynamical systems composed of aggregations of primitive proteins is critical to the field of astrobiological science involving early evolutionary structures and the origins of life. Unfortunately, traditional non-multi-agent methods either require oversimplified models or are slow to converge to adequate solutions. This paper shows how to address these deficiencies by modeling the protein aggregations through a utility-based multi-agent system. In this method each agent controls the properties of a set of proteins assigned to that agent. Some of these properties determine the dynamics of the system, such as the ability for some proteins to join or split other proteins, while additional properties determine the aggregation's fitness as a viable primitive cell. We show that over a wide range of starting conditions, there are mechanisms that allow protein aggregations to achieve high values of overall fitness. In addition, through the use of agent-specific utilities that remain aligned with the overall global utility, we are able to reach these conclusions with 50 times fewer learning steps.

  17. Gd-HOPO Based High Relaxivity MRI Contrast Agents

    SciTech Connect

    Datta, Ankona; Raymond, Kenneth

    2008-11-06

    Tris-bidentate HOPO-based ligands developed in our laboratory were designed to complement the coordination preferences of Gd{sup 3+}, especially its oxophilicity. The HOPO ligands provide a hexadentate coordination environment for Gd{sup 3+} in which all the donor atoms are oxygen. Because Gd{sup 3+} favors eight or nine coordination, this design provides two to three open sites for inner-sphere water molecules. These water molecules rapidly exchange with bulk solution, hence affecting the relaxation rates of bulk water molecules. The parameters affecting the efficiency of these contrast agents have been tuned to improve contrast while still maintaining a high thermodynamic stability for Gd{sup 3+} binding. The Gd-HOPO-based contrast agents surpass current commercially available agents because of a higher number of inner-sphere water molecules, rapid exchange of inner-sphere water molecules via an associative mechanism, and a long electronic relaxation time. The contrast enhancement provided by these agents is at least twice that of commercial contrast agents, which are based on polyaminocarboxylate ligands.

  18. An innovative blazar classification based on radio jet kinematics

    NASA Astrophysics Data System (ADS)

    Hervet, O.; Boisson, C.; Sol, H.

    2016-07-01

    Context. Blazars are usually classified following their synchrotron peak frequency (νF(ν) scale) as high, intermediate, low frequency peaked BL Lacs (HBLs, IBLs, LBLs), and flat spectrum radio quasars (FSRQs), or, according to their radio morphology at large scale, FR I or FR II. However, the diversity of blazars is such that these classes seem insufficient to chart the specific properties of each source. Aims: We propose to classify a wide sample of blazars following the kinematic features of their radio jets seen in very long baseline interferometry (VLBI). Methods: For this purpose we use public data from the MOJAVE collaboration in which we select a sample of blazars with known redshift and sufficient monitoring to constrain apparent velocities. We selected 161 blazars from a sample of 200 sources. We identify three distinct classes of VLBI jets depending on radio knot kinematics: class I with quasi-stationary knots, class II with knots in relativistic motion from the radio core, and class I/II, intermediate, showing quasi-stationary knots at the jet base and relativistic motions downstream. Results: A notable result is the good overlap of this kinematic classification with the usual spectral classification; class I corresponds to HBLs, class II to FSRQs, and class I/II to IBLs/LBLs. We deepen this study by characterizing the physical parameters of jets from VLBI radio data. Hence we focus on the singular case of the class I/II by the study of the blazar BL Lac itself. Finally we show how the interpretation that radio knots are recollimation shocks is fully appropriate to describe the characteristics of these three classes.

  19. Sequence-Based Classification Using Discriminatory Motif Feature Selection

    PubMed Central

    Xiong, Hao; Capurso, Daniel; Sen, Śaunak; Segal, Mark R.

    2011-01-01

    Most existing methods for sequence-based classification use exhaustive feature generation, employing, for example, all k-mer patterns. The motivation behind such (enumerative) approaches is to minimize the potential for overlooking important features. However, there are shortcomings to this strategy. First, practical constraints limit the scope of exhaustive feature generation to patterns of length at most k, such that potentially important, longer predictors are not considered. Second, features so generated exhibit strong dependencies, which can complicate understanding of derived classification rules. Third, and most importantly, numerous irrelevant features are created. These concerns can compromise prediction and interpretation. While remedies have been proposed, they tend to be problem-specific and not broadly applicable. Here, we develop a generally applicable methodology, and an attendant software pipeline, that is predicated on discriminatory motif finding. In addition to the traditional training and validation partitions, our framework entails a third level of data partitioning, a discovery partition. A discriminatory motif finder is used on sequences and associated class labels in the discovery partition to yield a (small) set of features. These features are then used as inputs to a classifier in the training partition. Finally, performance assessment occurs on the validation partition. Important attributes of our approach are its modularity (any discriminatory motif finder and any classifier can be deployed) and its universality (all data, including sequences that are unaligned and/or of unequal length, can be accommodated). We illustrate our approach on two nucleosome occupancy datasets and a protein solubility dataset, previously analyzed using enumerative feature generation. Our method achieves excellent performance results, with and without optimization of classifier tuning parameters. A Python pipeline implementing the approach is available at http
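
    As a concrete illustration of the three-level partitioning described above, the hedged sketch below splits data into discovery, training, and validation sets; the motif finder and feature encoder are abstracted as callables (matching the paper's modularity claim), and the random forest is our arbitrary stand-in classifier, not the paper's.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def run_pipeline(sequences, labels, find_motifs, motif_features):
            # Level 1: discovery partition -> a small set of motif features.
            seq_rest, seq_disc, y_rest, y_disc = train_test_split(
                sequences, labels, test_size=0.3, stratify=labels, random_state=0)
            motifs = find_motifs(seq_disc, y_disc)
            # Levels 2 and 3: train the classifier, then assess it on the
            # untouched validation partition.
            seq_tr, seq_val, y_tr, y_val = train_test_split(
                seq_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(motif_features(seq_tr, motifs), y_tr)
            return clf.score(motif_features(seq_val, motifs), y_val)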

  20. 3-Nitrotriazole-based piperazides as potent antitrypanosomal agents.

    PubMed

    Papadopoulou, Maria V; Bloomer, William D; Rosenzweig, Howard S; O'Shea, Ivan P; Wilkinson, Shane R; Kaiser, Marcel

    2015-10-20

    Novel linear 3-nitro-1H-1,2,4-triazole-based piperazides were synthesized and evaluated as antitrypanosomal agents. In addition, some bisarylpiperazine-ethanones which were formed as by-products were also screened for antiparasitic activity. Most 3-nitrotriazole-based derivatives were potent and selective against Trypanosoma cruzi parasites, but only one displayed these desired properties against Trypanosoma brucei rhodesiense. Moreover, two 3-nitrotriazole-based chlorophenylpiperazides were moderately and selectively active against Leishmania donovani. Although the bisarylpiperazine-ethanones were active or moderately active against T. cruzi, none of them demonstrated an acceptable selectivity. In general, 3-nitrotriazole-based piperazides were less toxic to host L6 cells than the previously evaluated 3-nitrotriazole-based piperazines and seven of 13 were 1.54- to 31.2-fold more potent antichagasic agents than the reference drug benznidazole. Selected compounds showed good ADMET characteristics. One potent in vitro antichagasic compound (3) was tested in an acute murine model and demonstrated antichagasic activity after a 10-day treatment of 15 mg/kg/day. However, neither compound 3 nor benznidazole showed a statistically significant P value compared to control due to high variability in parasite burden among the untreated animals. Working as prodrugs, 3-nitrotriazole-based piperazides were excellent substrates of trypanosomal type I nitroreductases and constitute a novel class of potentially effective and more affordable antitrypanosomal agents. PMID:26363868

  1. [ECoG classification based on wavelet variance].

    PubMed

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were presented and adopted as a feature, based on a discussion of the wavelet transform. Six channels with the most distinctive features were selected from 64 channels for analysis. The data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the Mu rhythm and Beta rhythm were taken as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets; the wavelet variance is simple and effective and is suitable for feature extraction in BCI research. PMID:23865300
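
    A minimal sketch of the wavelet-variance feature idea, using the PyWavelets package and the db4 wavelet named in the abstract; the decomposition level and the mapping of sub-bands to the Mu/Beta rhythms depend on the sampling rate and are assumptions here, not the authors' exact settings.

        import numpy as np
        import pywt

        def wavelet_variance_features(signal, wavelet="db4", level=5):
            # coeffs[0] is the approximation; coeffs[1:] are detail
            # coefficients from coarse to fine.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # The wavelet variance of each sub-band is the variance of its
            # coefficients; the sub-bands covering Mu (8-12 Hz) and Beta
            # (18-26 Hz) would be kept, given the sampling rate.
            return np.array([np.var(c) for c in coeffs])

        trial = np.random.randn(1000)   # one channel, one trial (toy data)
        print(wavelet_variance_features(trial))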

  2. Text Passage Retrieval Based on Colon Classification: Retrieval Performance.

    ERIC Educational Resources Information Center

    Shepherd, Michael A.

    1981-01-01

    Reports the results of experiments using colon classification for the analysis, representation, and retrieval of primary information from the full text of documents. Recall, precision, and search length measures indicate colon classification did not perform significantly better than Boolean or simple word occurrence systems. Thirteen references…

  3. Classification Based on Tree-Structured Allocation Rules

    ERIC Educational Resources Information Center

    Vaughn, Brandon K.; Wang, Qui

    2008-01-01

    The authors consider the problem of classifying an unknown observation into 1 of several populations by using tree-structured allocation rules. Although many parametric classification procedures are robust to certain assumption violations, there is need for classification procedures that can be used regardless of the group-conditional…

  4. An agent-based multilayer architecture for bioinformatics grids.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Milanesi, Luciano; Romano, Paolo

    2007-06-01

    Due to the huge volume and complexity of biological data available today, a fundamental component of biomedical research is now in silico analysis. This includes modelling and simulation of biological systems and processes, as well as automated bioinformatics analysis of high-throughput data. The quest for bioinformatics resources (including databases, tools, and knowledge) therefore becomes extremely important. Bioinformatics itself is evolving rapidly, and dedicated Grid cyberinfrastructures already offer easier access to and sharing of resources. Furthermore, the concept of the Grid is progressively interleaving with those of Web Services, semantics, and software agents. Agent-based systems can play a key role in learning, planning, interaction, and coordination. Agents also constitute a natural paradigm for engineering simulations of complex systems like molecular ones. We present here an agent-based, multilayer architecture for bioinformatics Grids. It is intended to support both the execution of complex in silico experiments and the simulation of biological systems. In the architecture, a pivotal role is assigned to an "alive" semantic index of resources, which is also expected to facilitate users' awareness of the bioinformatics domain. PMID:17695749

  5. Simulation of convoy of unmanned vehicles using agent based modeling

    NASA Astrophysics Data System (ADS)

    Sharma, Sharad; Singh, Harpreet; Gerhart, G. R.

    2007-10-01

    Interest in unmanned vehicles has been increasing, given their importance to defense and security. A few models for convoys of unmanned vehicles exist in the literature. The objective of this paper is to exploit agent-based modeling for a convoy of unmanned vehicles in which each vehicle is an agent. Using this approach, the convoy of vehicles reaches a specified goal from a starting point. Each agent is associated with a number of sensors. The agents make intelligent decisions based on sensor inputs while maintaining their group capability and behavior. The simulation is done for a battlefield environment from a single starting point to a single goal; the approach can be extended to multiple starting points and multiple goals. The simulation gives the time taken by the convoy to reach a goal from its initial position. In the battlefield environment, commanders make various tactical decisions depending upon the location of an enemy outpost, minefields, the number of soldiers in platoons, and barriers. The simulation can help the commander make effective decisions, depending on the battlefield, convoy, and obstacles, to reach a particular goal. The paper describes the proposed approach, gives the simulation results, and outlines problems for future research in this area.

  6. Hepatobiliary MR Imaging with Gadolinium Based Contrast Agents

    PubMed Central

    Frydrychowicz, Alex; Lubner, Meghan G.; Brown, Jeffrey J.; Merkle, Elmar M.; Nagle, Scott K.; Rofsky, Neil M.; Reeder, Scott B.

    2011-01-01

    The advent of gadolinium-based “hepatobiliary” contrast agents offers new opportunities for diagnostic MRI and has triggered great interest in innovative imaging approaches to the liver and bile ducts. In this review article we discuss the imaging properties of the two gadolinium-based hepatobiliary contrast agents currently available in the USA, gadobenate dimeglumine and gadoxetic acid, as well as important pharmacokinetic differences that affect their diagnostic performance. We review potential applications, protocol optimization strategies, as well as diagnostic pitfalls. A variety of illustrative case examples are used to demonstrate the role of these agents in detection and characterization of liver lesions as well as for imaging the biliary system. Changes in MR protocols geared towards optimizing workflow and imaging quality are also discussed. It is our aim that the information provided in this article will facilitate the optimal utilization of these agents, and will stimulate the reader's pursuit of new applications for future benefit. PMID:22334493

  7. Application of a Boltzmann-entropy-like concept in an agent-based multilane traffic model

    NASA Astrophysics Data System (ADS)

    Sugihakim, Ryan; Alatas, Husin

    2016-01-01

    We discuss the dynamics of an agent-based multilane traffic model using three defined rules. The dynamical characteristics of the model are described by a Boltzmann traffic entropy quantity that adopts the concept of Boltzmann entropy from statistical physics. The results are analyzed using fundamental diagrams based on lane density, entropy, and the derivative of entropy with respect to density. We show that three of the four possible initial-to-equilibrium state transition processes are allowed, and demonstrate that density and entropy fluctuations occur during the transition from the initial to the equilibrium state, exhibiting the well-known self-organization process. The related entropy concept can therefore be considered a new alternative quantity for describing the complexity of traffic dynamics.
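
    One way to make the entropy notion concrete (our reading, not necessarily the authors' exact definition): treat each lane's occupancy as a macrostate and count the microstates W = C(cells, vehicles) compatible with it, so that S = ln W summed over lanes.

        from math import comb, log

        def lane_entropy(n_cells, n_vehicles):
            # ln W for one lane: W ways to place the vehicles in the cells.
            return log(comb(n_cells, n_vehicles))

        def traffic_entropy(lanes):
            # lanes: list of (cells, vehicles) pairs, one per lane.
            return sum(lane_entropy(c, v) for c, v in lanes)

        print(traffic_entropy([(100, 30), (100, 55), (100, 10)]))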

  8. Dynamic Agent Classification and Tracking Using an Ad Hoc Mobile Acoustic Sensor Network

    NASA Astrophysics Data System (ADS)

    Friedlander, David; Griffin, Christopher; Jacobson, Noah; Phoha, Shashi; Brooks, Richard R.

    2003-12-01

    Autonomous networks of sensor platforms can be designed to interact in dynamic and noisy environments to determine the occurrence of specified transient events that define the dynamic process of interest. For example, a sensor network may be used for battlefield surveillance with the purpose of detecting, identifying, and tracking enemy activity. When the number of nodes is large, human oversight and control of low-level operations is not feasible. Coordination and self-organization of multiple autonomous nodes is necessary to maintain connectivity and sensor coverage and to combine information for better understanding the dynamics of the environment. Resource conservation requires adaptive clustering in the vicinity of the event. This paper presents methods for dynamic distributed signal processing using an ad hoc mobile network of microsensors to detect, identify, and track targets in noisy environments. They seamlessly integrate data from fixed and mobile platforms and dynamically organize platforms into clusters to process local data along the trajectory of the targets. Local analysis of sensor data is used to determine a set of target attribute values and classify the target. Sensor data from a field test in the Marine base at Twentynine Palms, Calif, was analyzed using the techniques described in this paper. The results were compared to "ground truth" data obtained from GPS receivers on the vehicles.

  9. Rule-based classification models of molecular autofluorescence.

    PubMed

    Su, Bo-Han; Tu, Yi-Shu; Lin, Olivia A; Harn, Yeu-Chern; Shen, Meng-Yu; Tseng, Yufeng J

    2015-02-23

    Fluorescence-based detection has been commonly used in high-throughput screening (HTS) assays. Autofluorescent compounds, which can emit light in the absence of artificial fluorescent markers, often interfere with the detection of fluorophores and result in false positive signals in these assays. This interference presents a major issue in fluorescence-based screening techniques. In an effort to reduce the time and cost that will be spent on prescreening of autofluorescent compounds, in silico autofluorescence prediction models were developed for selected fluorescence-based assays in this study. Five prediction models were developed based on the respective fluorophores used in these HTS assays, which absorb and emit light at specific wavelengths (excitation/emission): Alexa Fluor 350 (A350) (340 nm/450 nm), 7-amino-4-trifluoromethyl-coumarin (AFC) (405 nm/520 nm), Alexa Fluor 488 (A488) (480 nm/540 nm), Rhodamine (547 nm/598 nm), and Texas Red (547 nm/618 nm). The C5.0 rule-based classification algorithm and PubChem 2D chemical structure fingerprints were used to develop prediction models. To optimize the accuracies of these prediction models despite the highly imbalanced ratio of fluorescent versus nonfluorescent compounds presented in the collected data sets, oversampling and undersampling strategies were applied. The average final accuracy achieved for the training set was 97%, and that for the testing set was 92%. In addition, five external data sets were used to further validate the models. Ultimately, 14 representative structural features (or rules) were determined to efficiently predict autofluorescence in data sets containing both fluorescent and nonfluorescent compounds. Several cases were illustrated in this study to demonstrate the applicability of these rules. PMID:25625768
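
    The recipe of rule-style classification over binary fingerprints plus resampling can be sketched as follows; since C5.0 has no standard Python implementation, sklearn's decision tree stands in for it here, and the fingerprints and labels are synthetic placeholders rather than PubChem data.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        def oversample_minority(X, y, seed=0):
            # Naive random oversampling: duplicate minority-class rows until
            # the two classes are balanced.
            rng = np.random.default_rng(seed)
            classes, counts = np.unique(y, return_counts=True)
            idx = np.flatnonzero(y == classes[np.argmin(counts)])
            extra = rng.choice(idx, size=counts.max() - counts.min(), replace=True)
            return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(300, 881))       # toy PubChem-style bits
        y = (X[:, 0] & X[:, 1]).astype(int)           # synthetic label

        Xb, yb = oversample_minority(X, y)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xb, yb)
        print(export_text(tree, max_depth=3))         # human-readable rules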

  10. Hydrological landscape classification: investigating the performance of HAND based landscape classifications in a central European meso-scale catchment

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Hrachowitz, M.; Fenicia, F.; Savenije, H. H. G.

    2011-11-01

    This paper presents a detailed performance and sensitivity analysis of a recently developed hydrological landscape classification method based on dominant runoff mechanisms. Three landscape classes are distinguished: wetland, hillslope and plateau, corresponding to three dominant hydrological regimes: saturation excess overland flow, storage excess sub-surface flow, and deep percolation. Topography, geology and land use hold the key to identifying these landscapes. The height above the nearest drainage (HAND) and the surface slope, which can be easily obtained from a digital elevation model, appear to be the dominant topographical controls for hydrological classification. In this paper several indicators for classification are tested as well as their sensitivity to scale and resolution of observed points (sample size). The best results are obtained by the simple use of HAND and slope. The results obtained compared well with the topographical wetness index. The HAND based landscape classification appears to be an efficient method to "read the landscape" on the basis of which conceptual models can be developed.
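
    The classification rule itself reduces to two thresholds over the HAND and slope grids, as in this minimal sketch; the threshold values below are placeholders to be calibrated per catchment, not numbers from the paper.

        import numpy as np

        def classify_landscape(hand, slope, hand_max=5.0, slope_min=0.1):
            """hand, slope: 2-D arrays derived from a DEM.
            Returns 0 = wetland, 1 = hillslope, 2 = plateau."""
            cls = np.full(hand.shape, 2, dtype=np.uint8)  # default: plateau
            cls[slope >= slope_min] = 1                   # steep: hillslope
            cls[hand < hand_max] = 0                      # near drainage: wetland
            return cls

        hand = np.random.rand(4, 4) * 20.0    # toy height-above-drainage grid
        slope = np.random.rand(4, 4) * 0.3    # toy slope grid
        print(classify_landscape(hand, slope))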

  11. An Improved Feature Selection Based on Effective Range for Classification

    PubMed Central

    Zhou, Shuang

    2014-01-01

    Feature selection is a key issue in the domain of machine learning and related fields. The results of feature selection can directly affect the classifier's classification accuracy and generalization performance. Recently, a statistical feature selection method named effective range based gene selection (ERGS) was proposed. However, ERGS only considers the overlapping area (OA) among effective ranges of each class for every feature; it fails to handle the problem of the inclusion relation of effective ranges. In order to overcome this limitation, a novel efficient statistical feature selection approach called improved feature selection based on effective range (IFSER) is proposed in this paper. In IFSER, an including area (IA) is introduced to characterize the inclusion relation of effective ranges. Moreover, the samples' proportion for each feature of every class in both OA and IA is also taken into consideration. Therefore, IFSER outperforms the original ERGS and some other state-of-the-art algorithms. Experiments on several well-known databases are performed to demonstrate the effectiveness of the proposed method. PMID:24688449
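
    A hedged sketch of the effective-range bookkeeping: we take each class's effective range for a feature to be mean ± k·std (as in ERGS), measure the overlapping area as the intersection length, and test for the inclusion case that motivates IFSER's including area. The constant k and the final scoring formula in the paper may differ.

        import numpy as np

        def effective_range(x, k=1.732):
            return x.mean() - k * x.std(), x.mean() + k * x.std()

        def overlap_and_inclusion(r1, r2):
            lo, hi = max(r1[0], r2[0]), min(r1[1], r2[1])
            overlap = max(0.0, hi - lo)                     # OA contribution
            included = (r1[0] >= r2[0] and r1[1] <= r2[1]) or \
                       (r2[0] >= r1[0] and r2[1] <= r1[1])  # IA case
            return overlap, included

        a = np.random.normal(0.0, 1.0, 100)   # feature values, class A
        b = np.random.normal(2.0, 1.0, 100)   # feature values, class B
        print(overlap_and_inclusion(effective_range(a), effective_range(b)))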

  12. Classification of Histological Images Based on the Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Nascimento, M. Z.; Neves, L.; Duarte, S. C.; Duarte, Y. A. S.; Ramos Batista, V.

    2015-01-01

    Non-Hodgkin lymphomas are of many distinct types, and different classification systems make it difficult to diagnose them correctly. Many of these systems classify lymphomas only based on what they look like under a microscope. In 2008 the World Health Organisation (WHO) introduced the most recent system, which also considers the chromosome features of the lymphoma cells and the presence of certain proteins on their surface. The WHO system is the one that we apply in this work. Herewith we present an automatic method to classify histological images of three types of non-Hodgkin lymphoma. Our method is based on the Stationary Wavelet Transform (SWT), and it consists of three steps: 1) extracting sub-bands from the histological image through SWT, 2) applying Analysis of Variance (ANOVA) to clean noise and select the most relevant information, 3) classifying it by the Support Vector Machine (SVM) algorithm. The kernel types Linear, RBF and Polynomial were evaluated with our method applied to 210 images of lymphoma from the National Institute on Aging. We concluded that the following combination led to the most relevant results: detail sub-band, ANOVA and SVM with Linear and RBF kernels.

  13. Automatic classification for pathological prostate images based on fractal analysis.

    PubMed

    Huang, Po-Whei; Lee, Cheng-Hsiung

    2009-07-01

    Accurate grading of prostatic carcinoma in pathological images is important to prognosis and treatment planning. Since human grading is time-consuming and subjective, this paper presents a computer-aided system to automatically grade pathological images according to the Gleason grading system, the most widespread method for the histological grading of prostate tissues. We propose two feature extraction methods based on fractal dimension to analyze variations of intensity and texture complexity in regions of interest. Each image can be classified into an appropriate grade by using Bayesian, k-NN, and support vector machine (SVM) classifiers, respectively. Leave-one-out and k-fold cross-validation procedures were used to estimate the correct classification rates (CCR). Experimental results show that 91.2%, 93.7%, and 93.7% CCR can be achieved by the Bayesian, k-NN, and SVM classifiers, respectively, for a set of 205 pathological prostate images. If our fractal-based feature set is optimized by the sequential floating forward selection method, the CCR improves to 94.6%, 94.2%, and 94.6%, respectively, for the three classifiers. Experimental results also show that our feature set is better than the feature sets extracted from multiwavelets, Gabor filters, and gray-level co-occurrence matrix methods, because it is much smaller and still retains the most powerful discriminating capability in grading prostate images. PMID:19164082
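
    For readers unfamiliar with fractal features, the sketch below estimates a box-counting dimension on a binarised patch; the paper's intensity- and texture-based fractal features are more elaborate, so treat this as a minimal stand-in rather than the authors' method.

        import numpy as np

        def box_counting_dimension(img):
            # img: square boolean array with side a power of two.
            n = img.shape[0]
            sizes, counts = [], []
            s = n // 2
            while s >= 1:
                hits = sum(img[i:i + s, j:j + s].any()
                           for i in range(0, n, s) for j in range(0, n, s))
                sizes.append(s)
                counts.append(hits)
                s //= 2
            # Slope of log(count) vs log(1/size) estimates the dimension.
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                  np.log(counts), 1)
            return slope

        patch = np.random.rand(64, 64) > 0.5
        print(box_counting_dimension(patch))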

  14. Depth classification based solely on incoherent sonar information

    NASA Astrophysics Data System (ADS)

    Miklovic, Donald W.

    2005-09-01

    Most work on sonar contact depth estimation has been based on deterministic, coherent propagation modeling of the sound channel, e.g., matched-field processing. This has met with limited success due to the inability to precisely predict the sound-pressure field in realistic scenarios. This paper addresses the use of probabilistic, incoherent information from the sonar itself for depth classification with active sonar, without depending on precise and accurate propagation models and ancillary environmental measurements. In particular, we consider the problem of deciding whether or not a given contact is on the bottom based solely on the sonar data, i.e., without resorting to any additional environmental measurements or predictive models. To do this, the in situ local bottom reverberation is used to calibrate the channel. Probability theory is explored to provide a theoretical basis for the development of a single-hypothesis decision metric that best exploits this information. The methods are tested on a combination of broadband sonar data and detailed ocean simulations. The particular metric proposed for this problem seems to be important for achieving good performance, and may be of some interest in its own right for other types of single-hypothesis decision problems.

  15. Artillery/mortar type classification based on detected acoustic transients

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Grasing, David; Desai, Sachi

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors are used to capture the sound waveform generated by the blast, and the waveform is analyzed to identify mortar and artillery variants (as type A, etcetera). Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that employ acoustic sensor systems to provide such situational awareness.

  16. Revisiting an old friend: manganese-based MRI contrast agents

    PubMed Central

    Pan, Dipanjan; Caruthers, Shelton D.; Senpan, Angana; Schmieder, Ann H.; Wickline, Samuel A.; Lanza, Gregory M.

    2011-01-01

    Non-invasive cellular and molecular imaging techniques are emerging as a multidisciplinary field that offers promise in understanding the components, processes, dynamics, and therapies of disease at a molecular level. Magnetic resonance imaging (MRI) is an attractive technique due to its absence of ionizing radiation and its high spatial resolution, which make it advantageous over techniques involving radioisotopes. Typically, paramagnetic and superparamagnetic metals are used as contrast materials for MR-based techniques. Gadolinium has been the predominant paramagnetic contrast metal until the discovery and association of the metal with nephrogenic systemic fibrosis (NSF) in some patients with severe renal disease. Manganese was one of the earliest reported examples of paramagnetic contrast material for MRI because of its efficient positive contrast enhancement. In this review, manganese-based contrast agent approaches are presented with a particular emphasis on nanoparticulate agents. We discuss both classically used small-molecule blood-pool contrast agents and recently developed innovative nanoparticle-based strategies, highlighting a number of successful molecular imaging examples. PMID:20860051

  17. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    The accuracy of remote sensing (RS) classification based on SVM, which is developed from statistical learning theory, is high even with a small number of training samples, which makes SVM-based RS classification attractive. The traditional RS classification method combines visual interpretation with computer classification; an SVM-based method improves accuracy considerably while saving much of the labor and time spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. Our method uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images; moreover, the compound kernel improves the generalization and learning ability of the kernel.
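
    A compound kernel can be passed to an SVM as a callable, as in this hedged sketch: a convex mix of RBF and polynomial kernels (any convex combination of valid kernels is itself a valid kernel). The mixing weight and kernel parameters are illustrative; the paper's improved compound kernel may combine different components.

        import numpy as np
        from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
        from sklearn.svm import SVC

        def compound_kernel(X, Y, w=0.7, gamma=0.5, degree=2):
            # Weighted sum of two Gram matrices.
            return (w * rbf_kernel(X, Y, gamma=gamma)
                    + (1.0 - w) * polynomial_kernel(X, Y, degree=degree))

        X = np.random.rand(60, 4)                    # toy per-pixel band values
        y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # toy land-cover labels
        clf = SVC(kernel=compound_kernel).fit(X, y)
        print("training accuracy:", clf.score(X, y))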

  18. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  19. Agent-based models in translational systems biology

    PubMed Central

    An, Gary; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram

    2013-01-01

    Effective translational methodologies for knowledge representation are needed in order to make strides against the constellation of diseases that affect the world today. These diseases are defined by their mechanistic complexity, redundancy, and nonlinearity. Translational systems biology aims to harness the power of computational simulation to streamline drug/device design, simulate clinical trials, and eventually to predict the effects of drugs on individuals. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggests that this modeling framework is well suited for translational systems biology. This review describes agent-based modeling and gives examples of its translational applications in the context of acute inflammation and wound healing. PMID:20835989

  20. Agent-Based Chemical Plume Tracing Using Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zarzhitsky, Dimitri; Spears, Diana; Thayer, David; Spears, William

    2004-01-01

    This paper presents a rigorous evaluation of a novel, distributed chemical plume tracing algorithm. The algorithm is a combination of the best aspects of the two most popular predecessors for this task. Furthermore, it is based on solid, formal principles from the field of fluid mechanics. The algorithm is applied by a network of mobile sensing agents (e.g., robots or micro-air vehicles) that sense the ambient fluid velocity and chemical concentration, and calculate derivatives. The algorithm drives the robotic network to the source of the toxic plume, where measures can be taken to disable the source emitter. This work is part of a much larger effort in research and development of a physics-based approach to developing networks of mobile sensing agents for monitoring, tracking, reporting and responding to hazardous conditions.

  1. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated into a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes the kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. Software artifact generation is based on a metamodel: each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  2. Endogenizing geopolitical boundaries with agent-based modeling.

    PubMed

    Cederman, Lars-Erik

    2002-05-14

    Agent-based modeling promises to overcome the reification of actors. Whereas this common, but limiting, assumption makes a lot of sense during periods characterized by stable actor boundaries, other historical junctures, such as the end of the Cold War, exhibit far-reaching and swift transformations of actors' spatial and organizational existence. Moreover, because actors cannot be assumed to remain constant in the long run, analysis of macrohistorical processes virtually always requires "sociational" endogenization. This paper presents a series of computational models, implemented with the software package REPAST, which trace complex macrohistorical transformations of actors be they hierarchically organized as relational networks or as collections of symbolic categories. With respect to the former, dynamic networks featuring emergent compound actors with agent compartments represented in a spatial grid capture organizational domination of the territorial state. In addition, models of "tagged" social processes allows the analyst to show how democratic states predicate their behavior on categorical traits. Finally, categorical schemata that select out politically relevant cultural traits in ethnic landscapes formalize a constructivist notion of national identity in conformance with the qualitative literature on nationalism. This "finite-agent method", representing both states and nations as higher-level structures superimposed on a lower-level grid of primitive agents or cultural traits, avoids reification of agency. Furthermore, it opens the door to explicit analysis of entity processes, such as the integration and disintegration of actors as well as boundary transformations. PMID:12011409

  4. Agent-Based Modeling and Simulation on Emergency Evacuation

    NASA Astrophysics Data System (ADS)

    Ren, Chuanjun; Yang, Chenghui; Jin, Shiyao

    Crowd stampedes and evacuations induced by panic during emergencies often lead to fatalities as people are crushed or trampled. Such phenomena may be triggered in life-threatening situations such as fires or explosions in crowded buildings. Emergency evacuation simulation has recently attracted the interest of a rapidly increasing number of scientists. This paper presents an agent-based modeling and simulation, built with the Repast software, of crowd evacuation for emergency response to a fire in an area. Various types of agents with different attributes are designed, in contrast to traditional modeling. The attributes that govern the characteristics of the people are studied and tested by iterative simulations, and simulations are conducted to demonstrate the effect of various agent parameters. Some interesting results were observed, such as the 'faster is slower' effect and the ignorance of available exits. Finally, the simulation results suggest practical ways of minimizing the harmful consequences of such events and indicate the existence of an optimal escape strategy.

  5. Voice pathology classification based on High-Speed Videoendoscopy.

    PubMed

    Panek, D; Skalski, A; Zielinski, T; Deliyski, D D

    2015-08-01

    This work presents a method for the automatic and objective classification of patients with healthy and pathological vocal fold vibration using High-Speed Videoendoscopy of the larynx. We used image segmentation and the extraction of a novel set of numerical parameters describing the spatio-temporal dynamics of the vocal folds to classify normal and pathological cases, achieving 73.3% cross-validation classification accuracy. This approach is promising for developing an automatic diagnosis tool for voice disorders. PMID:26736367

  6. A knowledge-based approach of satellite image classification for urban wetland detection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan

    It has been a technical challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This is mainly caused by the inadequate spatial resolution of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. Knowledge-based classification, with great potential to overcome or reduce these technical impediments, has been applied to various image classifications focusing on urban land use/land cover and forest wetlands, but rarely to mapping the wetlands in urban landscapes. This study aims to improve the mapping accuracy of urban wetlands by integrating pixel-based classification with the knowledge-based approach. The study area is the metropolitan area of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland - using the pixel-based supervised maximum likelihood classification method. The products of supervised classification are used as the comparative base maps. For our new classification approach, a knowledge base is developed to improve urban wetland detection; it includes a set of decision rules for identifying wetland cover in relation to its elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geostatistics. Using the knowledge classifier tool of the ERDAS Imagine software, the decision rules are applied to the base maps in order to identify wetlands that cannot be detected by pixel-based classification alone. The results suggest that the knowledge-based image classification approach can enhance urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.
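
    The knowledge-base step can be pictured as post-classification rules over auxiliary layers, as in this hedged sketch; the specific rule (low elevation, adjacency to water, currently labelled farmland) and its thresholds are illustrative, not the study's actual decision rules.

        import numpy as np
        from scipy.ndimage import binary_dilation

        WETLAND, FARM, BUILT, FOREST = 0, 1, 2, 3

        def refine(labels, elevation, water_mask, elev_max=2.0):
            # Relabel pixels as wetland when the rule fires: low elevation,
            # near open water, and spectrally classified as farmland.
            near_water = binary_dilation(water_mask, iterations=3)
            rule = (elevation < elev_max) & near_water & (labels == FARM)
            out = labels.copy()
            out[rule] = WETLAND
            return out

        labels = np.random.randint(0, 4, (5, 5))      # toy supervised map
        elevation = np.random.rand(5, 5) * 5.0        # toy DEM
        water = np.zeros((5, 5), dtype=bool)
        water[2, 2] = True
        print(refine(labels, elevation, water))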

  7. Investigating biocomplexity through the agent-based paradigm

    PubMed Central

    Kaul, Himanshu

    2015-01-01

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment as well as the flexible and heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches that assume component homogeneity to relate system observables using mathematical equations. While the homogeneity condition does not lead to loss of accuracy while simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems is a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines—or agents—to simulate, from the bottom-up, macroscopic properties of a system. In recognizing the heterogeneity condition, they offer suitable ontologies to the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms. The integration of any agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although in its nascence, agent-based modelling has been utilized to model biological complexity across a broad range of biological scales (from cells to societies). In this article, we explore the reasons that make agent-based modelling the most precise approach to model biological systems that tend to be non-linear and complex. PMID:24227161

  8. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    NASA Astrophysics Data System (ADS)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causal relationships and feedback loops among different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  9. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    NASA Astrophysics Data System (ADS)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can breach personal privacy when the task involves sensitive information such as banking matters. BCI-driven, robot-based writing, however, can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provides a feasible platform for BCI-based writing.

  10. Interactive agent based modeling of public health decision-making.

    PubMed

    Parks, Amanda L; Walker, Brett; Pettey, Warren; Benuzillo, Jose; Gesteland, Per; Grant, Juliana; Koopman, James; Drews, Frank; Samore, Matthew

    2009-01-01

    Agent-based models have yielded important insights regarding the transmission dynamics of communicable diseases. To better understand how these models can be used to study the decision making of public health officials, we developed a computer program that linked an agent-based model of pertussis with an agent-based model of public health management. The program, which we call the Public Health Interactive Model & simulation (PHIMs), encompassed the reporting of cases to public health, case investigation, and the public health response. The user directly interacted with the model in the role of the public health decision-maker. In this paper we describe the design of our model and present the results of a pilot study to assess its usability and potential for future development. Affinity for specific tools was demonstrated. Participants ranked the program high in usability and considered it useful for training. Our ultimate goal is to achieve better public health decisions and outcomes through the use of public health decision support tools. PMID:20351907

  11. Palm-Vein Classification Based on Principal Orientation Features

    PubMed Central

    Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao

    2014-01-01

    Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live-body identification, flexibility, and resistance to cheating. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix, and the principal direction of a palm-vein image is then computed from the orientation matrix. The database can be divided into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining recognition accuracy, the two neighborhood bins of the corresponding bin are searched as well to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in the PolyU, CASIA, and our databases can be reduced to 14.29%, 14.50%, and 14.28% by the proposed method, with retrieval accuracies of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process is 18.56 s by the traditional method and 3.16 s by the proposed approach. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
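
    The bin-then-search logic is simple to state in code; the sketch below quantises the principal direction into six bins and searches the matching bin plus its two neighbours, with the Gaussian-Radon orientation extraction abstracted away (the angles used here are toy values, not extracted features).

        import numpy as np

        N_BINS = 6                       # six principal-direction bins

        def bin_of(angle_deg):
            return int(angle_deg // (180.0 / N_BINS)) % N_BINS

        def candidate_ids(query_angle, database):
            b = bin_of(query_angle)
            search = {(b - 1) % N_BINS, b, (b + 1) % N_BINS}  # bin + neighbours
            return [sid for sid, angle in database if bin_of(angle) in search]

        # database entries: (sample id, stored principal direction in degrees)
        database = [(i, float(np.random.uniform(0, 180))) for i in range(20)]
        print(candidate_ids(97.0, database))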

  12. Event-Based User Classification in Weibo Media

    PubMed Central

    Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as real-time microblogging services, has attracted massive attention and support from social network users. The Weibo platform offers an opportunity for people to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to events, and users who post different content and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby further manage their content, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate the Weibo properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately. PMID:25133235

  13. China's Classification-Based Forest Management: Procedures, Problems, and Prospects

    NASA Astrophysics Data System (ADS)

    Dai, Limin; Zhao, Fuqiang; Shao, Guofan; Zhou, Li; Tang, Lina

    2009-06-01

    China's new Classification-Based Forest Management (CFM) is a two-class system, including Commodity Forest (CoF) and Ecological Welfare Forest (EWF) lands, so named according to differences in their distinct functions and services. The purposes of CFM are to improve forestry economic systems, strengthen resource management in a market economy, ease the conflicts between wood demands and public welfare, and meet the diversified needs for forest services in China. The formative process of China's CFM has involved a series of trials and revisions. China's central government accelerated the reform of CFM in the year 2000 and completed the final version in 2003. CFM was implemented at the provincial level with the aid of subsidies from the central government. About a quarter of the forestland in China was approved as National EWF lands by the State Forestry Administration in 2006 and 2007. Logging is prohibited on National EWF lands, and their landowners or managers receive subsidies of about 70 RMB (US$10) per hectare from the central government. CFM represents a new forestry strategy in China and its implementation inevitably faces challenges in promoting the understanding of forest ecological services, generalizing nationwide criteria for identifying EWF and CoF lands, setting up forest-specific compensation mechanisms for ecological benefits, enhancing the knowledge of administrators and the general public about CFM, and sustaining EWF lands under China's current forestland tenure system. CFM does, however, offer a viable pathway toward sustainable forest management in China.

  15. Basic Hand Gestures Classification Based on Surface Electromyography.

    PubMed

    Palkowski, Aleksander; Redlarski, Grzegorz

    2016-01-01

    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method. PMID:27298630

  17. Graphene-based nanomaterials as molecular imaging agents.

    PubMed

    Garg, Bhaskar; Sung, Chu-Hsun; Ling, Yong-Chien

    2015-01-01

    Molecular imaging (MI) is the noninvasive, real-time visualization of biochemical events at the cellular and molecular level within tissues, living cells, and/or intact objects; it can be advantageously applied in the areas of diagnostics, therapeutics, drug discovery, and development in understanding nanoscale reactions, including enzymatic conversions and protein-protein interactions. Consequently, over the years, great advances have been made in the development of a variety of MI agents such as peptides, aptamers, antibodies, and various nanomaterials (NMs), including single-walled carbon nanotubes. Recently, graphene, a material popularized by Geim & Novoselov, has ignited considerable research efforts to rationally design and execute a wide range of graphene-based NMs, making them an attractive platform for developing highly sensitive MI agents. Owing to their exceptional physicochemical and biological properties combined with desirable surface engineering, graphene-based NMs offer stable and tunable visible emission, small hydrodynamic size, low toxicity, and high biocompatibility, and they have thus been explored for in vitro and in vivo imaging applications as a promising alternative to traditional imaging agents. This review begins by describing the intrinsic properties of graphene and the key MI modalities. We then provide an overview of the recent advances in the design and development, as well as the physicochemical properties, of the different classes of graphene-based NMs (graphene-dye conjugates, graphene-antibody conjugates, graphene-nanoparticle composites, and graphene quantum dots) being used as MI agents for potential applications including theranostics. Finally, the major challenges and future directions in the field are discussed. PMID:25857851

  18. Remote sensing image classification based on support vector machine with the multi-scale segmentation

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Feng, Wei; Ma, Ruishi

    2015-12-01

    In this paper, we propose a new classification method based on a support vector machine (SVM) combined with multi-scale segmentation. The proposed method obtains satisfactory segmentation results based on both the spectral characteristics and the shape parameters of segments, and the SVM is then used to label these regions, which effectively improves the classification results. First, the homogeneity of the object spectra, texture, and shape is calculated from the input image. Second, the multi-scale segmentation method is applied to the RS image; combining graph-theory-based optimization with the multi-scale segmentation, the resulting segments are merged according to heterogeneity criteria. Finally, based on the segmentation result, an SVM model combining spectral and texture features is constructed and applied. The results show that the proposed method can effectively improve remote sensing image classification accuracy and efficiency.

  19. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
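
    The first level of the hierarchy (event detection from short-time audio features) can be sketched as below; the frame length and the mean-based thresholds are placeholders, not the paper's tuned values.

        import numpy as np

        def short_time_features(x, frame=1024):
            # Split the signal into non-overlapping frames and compute
            # short-time energy and zero-crossing rate per frame.
            n = len(x) // frame
            frames = x[:n * frame].reshape(n, frame)
            energy = (frames ** 2).sum(axis=1)
            zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).sum(axis=1) / frame
            return energy, zcr

        audio = np.random.randn(44100)               # 1 s of toy audio
        energy, zcr = short_time_features(audio)
        events = (energy > energy.mean()) & (zcr > zcr.mean())
        print("candidate event frames:", np.flatnonzero(events))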

  20. Domination and evolution in agent based model of an economy

    NASA Astrophysics Data System (ADS)

    Kazmi, Syed S.

    We introduce agent-based models of a pure exchange economy and of a simple economy that includes production, consumption, and distribution. Markets are described by Edgeworth exchange in both models. Trades are binary bilateral trades, with prices set in each trade. We find that prices converge over time to a value that is not the standard equilibrium value given by the Walrasian tâtonnement fiction. The average price and the distribution of wealth depend on the degree of domination (persuasive power) we introduce, based on differentials in trading "leverage" due to wealth differences. The full economy model is allowed to evolve by replacing agents that do not survive with agents having random properties. We find that, depending on the average productivity compared with the average consumption, very different kinds of behavior emerge. The economy as a whole reaches a steady state through the population adapting to the conditions of productivity and consumption. Correlations develop in the population between what would be, for each individual, random assignments of productivity, labor power, wealth, and preferences. The population adapts to the economic environment by developing these correlations, without any learning process. We see signs of emerging social structure as a result of the necessity of survival.
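
    The bilateral-trade mechanism can be sketched in a few lines of Python: pairs of Cobb-Douglas agents trade at a per-trade price inside their bargaining range, with a wealth-weighted price rule standing in for the "domination" parameter. All functional forms and parameter values below are assumptions for illustration, not the author's specification.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100
        goods = rng.uniform(1, 10, size=(N, 2))   # endowments of two goods
        alpha = rng.uniform(0.2, 0.8, size=N)     # Cobb-Douglas preference weights

        def mrs(i):
            # Marginal rate of substitution (good 2 per unit of good 1), Cobb-Douglas
            return (alpha[i] / (1 - alpha[i])) * (goods[i, 1] / goods[i, 0])

        prices = []
        for _ in range(20000):
            i, j = rng.choice(N, size=2, replace=False)
            mi, mj = mrs(i), mrs(j)
            if abs(mi - mj) < 1e-9:
                continue
            # Price inside the bargaining range; the wealth weight w stands in for
            # "domination" by wealth. This weighting is an assumption, not the paper's rule.
            w = goods[i].sum() / (goods[i].sum() + goods[j].sum())
            p = w * mi + (1 - w) * mj
            buyer, seller = (i, j) if mi > mj else (j, i)  # buyer values good 1 more
            dq = 0.01
            if goods[seller, 0] > dq and goods[buyer, 1] > p * dq:
                goods[seller, 0] -= dq; goods[buyer, 0] += dq
                goods[buyer, 1] -= p * dq; goods[seller, 1] += p * dq
                prices.append(p)

        print("late average price:", np.mean(prices[-1000:]))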

  1. An agent-based microsimulation of critical infrastructure systems

    SciTech Connect

    BARTON,DIANNE C.; STAMBER,KEVIN L.

    2000-03-29

    US infrastructures provide essential services that support economic prosperity and quality of life. Today, the latest threat to these infrastructures is the increasing complexity and interconnectedness of the system. On balance, added connectivity will improve economic efficiency; however, increased coupling could also result in situations where a disturbance in an isolated infrastructure unexpectedly cascades across diverse infrastructures. An understanding of the behavior of complex systems can be critical to understanding and predicting infrastructure responses to unexpected perturbations. Sandia National Laboratories has developed an agent-based model of critical US infrastructures using time-dependent Monte Carlo methods and a genetic-algorithm learning classifier system to control decision making. The model is currently under development and contains agents that represent several areas within the interconnected infrastructures, including electric power and fuel supply. Previous work shows that agent-based simulation models have the potential to improve the accuracy of complex system forecasting and to provide new insights into the factors that are the primary drivers of emergent behaviors in interdependent systems. Simulation results can be examined both computationally and analytically, offering new ways of theorizing about the impact of perturbations to an infrastructure network.

  2. Agent-based modeling of urban land-use change

    NASA Astrophysics Data System (ADS)

    Li, Xinyan; Li, Deren

    2005-10-01

    ABM (Agent-Based Modeling) is a relatively new computer simulation method that is active, dynamic, and operational in character. Urban land-use change has become a focal problem worldwide, especially in developing countries. We use ABM to model urban land-use change. By studying the mechanisms of urban land-use evolution, we outline a modeling approach, and a preliminary urban land-use change model is built on the RePast toolkit and a GIS spatial database.

  3. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within a hierarchical step-wise optimization algorithm. First, a probabilistic support vector machine classification is applied. Then, at each iteration, the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged and the classification probabilities are recomputed. A key contribution of this work is the estimation of the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.

  4. Classification of PolSAR image based on quotient space theory

    NASA Astrophysics Data System (ADS)

    An, Zhihui; Yu, Jie; Liu, Xiaomeng; Liu, Limin; Jiao, Shuai; Zhu, Teng; Wang, Shaohua

    2015-12-01

    In order to improve classification accuracy, quotient space theory is applied to the classification of polarimetric SAR (PolSAR) images. First, the Yamaguchi decomposition is adopted to extract the polarimetric characteristics of the image; at the same time, Gray-Level Co-occurrence Matrix (GLCM) and Gabor wavelet features are extracted to characterize texture. Second, combining the texture features and polarimetric characteristics, a Support Vector Machine (SVM) classifier performs an initial classification to establish different granularity spaces. Finally, according to quotient space granularity synthesis theory, the different quotient spaces are merged to obtain the comprehensive classification result. The method is tested on L-band AIRSAR data of San Francisco Bay. The results show that the comprehensive classification result based on quotient space theory is superior to the classification result of any single granularity space.

  5. Economic evaluations with agent-based modelling: an introduction.

    PubMed

    Chhatwal, Jagpreet; He, Tianhua

    2015-05-01

    Agent-based modelling (ABM) is a relatively new technique that overcomes some of the limitations of other methods commonly used for economic evaluations, such as assumptions of linearity, homogeneity and stationarity. Agents in ABMs are autonomous entities who interact with each other and with the environment. ABMs provide an inductive or 'bottom-up' approach: individual-level behaviours define system-level components. ABMs have the distinctive ability to capture emergent phenomena, which cannot be predicted from individual-level behaviours considered in isolation. In this tutorial, we discuss the basic concepts and important features of ABMs. We present a case study applying a simple ABM to evaluate the cost effectiveness of screening for an infectious disease. We also provide our model, which was developed using an open-source software program, NetLogo. We discuss software, resources, challenges and future research opportunities of ABMs for economic evaluations. PMID:25609398
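
    The kind of analysis described in this tutorial can be sketched in a few lines: run a toy infection ABM with and without a screening intervention, accumulate costs and QALYs, and compute the incremental cost-effectiveness ratio (ICER). The paper's model is in NetLogo; the Python sketch below and all its parameter values are illustrative assumptions, not the authors' model.

        import numpy as np

        def run(screening, n=1000, steps=104, beta=0.05, seed=7):
            """Toy infection ABM: random mixing; screening detects and cures infecteds."""
            rng = np.random.default_rng(seed)
            infected = np.zeros(n, dtype=bool)
            infected[:10] = True
            cost, qaly = 0.0, 0.0
            for _ in range(steps):
                # each agent meets one random partner; susceptibles may get infected
                partners = rng.integers(0, n, size=n)
                new = (~infected) & infected[partners] & (rng.random(n) < beta)
                infected |= new
                if screening:
                    tested = rng.random(n) < 0.10                  # 10% screened per step
                    cured = tested & infected
                    infected[cured] = False
                    cost += tested.sum() * 5 + cured.sum() * 200   # test + treatment costs
                qaly += (n - infected.sum()) * (1/52) + infected.sum() * (0.7/52)
            return cost, qaly

        c0, q0 = run(screening=False)
        c1, q1 = run(screening=True)
        print("ICER (cost per QALY gained):", (c1 - c0) / (q1 - q0))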

  6. Small Antimicrobial Agents Based on Acylated Reduced Amide Scaffold.

    PubMed

    Teng, Peng; Huo, Da; Nimmagadda, Alekhya; Wu, Jianfeng; She, Fengyu; Su, Ma; Lin, Xiaoyang; Yan, Jiyu; Cao, Annie; Xi, Chuanwu; Hu, Yong; Cai, Jianfeng

    2016-09-01

    The prevalence of drug-resistant bacteria has emerged as one of the greatest threats of the 21st century. Herein, we report the development of a series of small-molecule antibacterial agents based on the acylated reduced amide scaffold. These molecules display good potency against a panel of multidrug-resistant Gram-positive and Gram-negative bacterial strains, and they also effectively inhibit biofilm formation. Mechanistic studies suggest that these compounds kill bacteria by compromising bacterial membranes, a mechanism analogous to that of host-defense peptides (HDPs). The mechanism is further supported by the fact that the lead compounds do not induce resistance in MRSA even after 14 passages. Lastly, we demonstrate the therapeutic potential of these molecules by showing that they prevent inflammation in a rat model of MRSA-induced pneumonia. This class of compounds could lead to an appealing class of antibiotic agents for combating drug-resistant bacterial strains. PMID:27526720

  7. Hypercompetitive Environments: An Agent-based model approach

    NASA Astrophysics Data System (ADS)

    Dias, Manuel; Araújo, Tanya

    Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both for individual firms and for management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools are giving researchers increasing opportunities to investigate management environments in their true complex nature. The adoption of a complex systems approach allows business strategies to be modeled from a bottom-up perspective, understood as resulting from repeated local interactions of economic agents, without disregarding the consequences of the business strategies themselves for the individual behavior of enterprises and the emergence of interaction patterns between firms and management environments. Agent-based models are the leading approach in this effort.

  8. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, which permits a direct comparison with all available data. We show that certain elements are of great importance in starting drug use, for example rare events in personal experience that allow the barrier to occasional drug use to be overcome. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it suggests strategies for effective policy making. The present model represents a first step towards a realistic description of this phenomenon and can easily be generalized in various directions.

  9. Tissue-based standoff biosensors for detecting chemical warfare agents

    DOEpatents

    Greenbaum, Elias; Sanders, Charlene A.

    2003-11-18

    A tissue-based, deployable, standoff air quality sensor for detecting the presence of at least one chemical or biological warfare agent, includes: a cell containing entrapped photosynthetic tissue, the cell adapted for analyzing photosynthetic activity of the entrapped photosynthetic tissue; means for introducing an air sample into the cell and contacting the air sample with the entrapped photosynthetic tissue; a fluorometer in operable relationship with the cell for measuring photosynthetic activity of the entrapped photosynthetic tissue; and transmitting means for transmitting analytical data generated by the fluorometer relating to the presence of at least one chemical or biological warfare agent in the air sample, the sensor adapted for deployment into a selected area.

  10. Convergence and optimization of agent-based coalition formation

    NASA Astrophysics Data System (ADS)

    Wang, Yuanshi; Wu, Hong

    2005-03-01

    In this paper, we analyze a model of agent-based coalition formation in markets. Our goal is to study the convergence of the coalition formation and to optimize agents' strategies. We show that the model has a unique steady state (equilibrium) and prove that all solutions converge to it when the maximum coalition size is at most three. The stability of the steady state in other cases is not analyzed, but numerical simulations demonstrate convergence. The steady state, which determines both the global system gain and the average gain per agent, is expressed in terms of the agents' strategies in the coalition formation. Through the steady state, we give the relationship between the gains and the agents' strategies, and present a series of results for the optimization of agents' strategies.

  11. Reaction to Extreme Events in a Minimal Agent Based Model

    NASA Astrophysics Data System (ADS)

    Zaccaria, Andrea; Cristelli, Matthieu; Pietronero, Luciano

    We consider the issue of the overreaction of financial markets to a sudden price change. In particular, we focus on the price and population dynamics that follow a large fluctuation. In order to investigate these aspects from different perspectives, we discuss the known results for empirical data, the Lux-Marchesi model and a minimal agent-based model which we have recently proposed. We show that, in this framework, the presence of an overreaction is deeply linked to the population dynamics. In particular, the presence of a destabilizing strategy in the market is a necessary condition for an overshoot with respect to the exogenously induced price fluctuation. Finally, we analyze how the memory of the agents can quantitatively affect this behavior.

  12. Agent-Based Modeling of Noncommunicable Diseases: A Systematic Review

    PubMed Central

    Arah, Onyebuchi A.

    2015-01-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Science published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application. PMID:25602871

  13. Perspective: A Dynamics-Based Classification of Ventricular Arrhythmias

    PubMed Central

    Weiss, James N.; Garfinkel, Alan; Karagueuzian, Hrayr S.; Nguyen, Thao P.; Olcese, Riccardo; Chen, Peng-Sheng; Qu, Zhilin

    2015-01-01

    Despite key advances in the clinical management of life-threatening ventricular arrhythmias, culminating with the development of implantable cardioverter-defibrillators and catheter ablation techniques, pharmacologic/biologic therapeutics have lagged behind. The fundamental issue is that biological targets are molecular factors. Diseases, however, represent emergent properties at the scale of the organism that result from dynamic interactions between multiple constantly changing molecular factors. For a pharmacologic/biologic therapy to be effective, it must target the dynamic processes that underlie the disease. Here we propose a classification of ventricular arrhythmias that is based on our current understanding of the dynamics occurring at the subcellular, cellular, tissue and organism scales, which cause arrhythmias by simultaneously generating arrhythmia triggers and exacerbating tissue vulnerability. The goal is to create a framework that systematically links these key dynamic factors together with fixed factors (structural and electrophysiological heterogeneity) synergistically promoting electrical dispersion and increased arrhythmia risk to molecular factors that can serve as biological targets. We classify ventricular arrhythmias into three primary dynamic categories related generally to unstable Ca cycling, reduced repolarization, and excess repolarization, respectively. The clinical syndromes, arrhythmia mechanisms, dynamic factors and what is known about their molecular counterparts are discussed. Based on this framework, we propose a computational-experimental strategy for exploring the links between molecular factors, fixed factors and dynamic factors that underlie life-threatening ventricular arrhythmias. The ultimate objective is to facilitate drug development by creating an in silico platform to evaluate and predict comprehensively how molecular interventions affect not only a single targeted arrhythmia, but all primary arrhythmia dynamics

  14. The evolving classification of soft tissue tumours: an update based on the new WHO classification.

    PubMed

    Fletcher, C D M

    2006-01-01

    Tumour classifications have become an integral part of modern oncology and, for pathologists, they provide guidelines which facilitate diagnostic and prognostic reproducibility. In many organ systems and most especially over the past decade or so, the World Health Organization (WHO) classifications have become pre-eminent, partly enabled by the timely publication of new "blue books" which now incorporate detailed text and copious illustrations. The new WHO classification of soft tissue tumours was introduced in late 2002 and, because it represents a broad consensus view, it has gained widespread acceptance. This review summarizes the changes, both major and minor, which were introduced and briefly describes the significant number of tumour types which have been first recognized or properly characterized during the past decade. Arguably the four most significant conceptual advances have been: (i) the formal recognition that morphologically benign lesions (such as cutaneous fibrous histiocytoma) may very rarely metastasize; (ii) the general acceptance that most pleomorphic sarcomas can be meaningfully subclassified and that so-called malignant fibrous histiocytoma is not a definable entity, but instead represents a wastebasket of undifferentiated pleomorphic sarcomas, accounting for no more than 5% of adult soft tissue sarcomas; (iii) the acknowledgement that most lesions formerly known as haemangiopericytoma show no evidence of pericytic differentiation and, instead, are fibroblastic in nature and form a morphological continuum with solitary fibrous tumour; and (iv) the increasing appreciation that not only do we not know from which cell type(s) most soft tissue tumours originate (histogenesis) but, for many, we do not recognize their line of differentiation or lineage--hence an increasing number of tumours are placed in the "uncertain differentiation" category. PMID:16359532

  15. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images. The parameters for the backlight dimming level and pixel compensation adapt to the image classification. Simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed; no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
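
    The core global-dimming step, choosing one backlight level from the image's luminance statistics and compensating the pixels, can be sketched as follows. The class-dependent clipping percentile is an assumed rule for illustration; the paper's actual parameter mapping is not reproduced here.

        import numpy as np

        def global_dimming(img, clip_pct):
            """Set one global backlight level and compensate pixels.

            img: 8-bit grayscale luminance image; clip_pct: percentile kept,
            chosen per image class (assumed rule, not the paper's exact mapping).
            """
            level = np.percentile(img, clip_pct) / 255.0          # backlight duty in [0,1]
            level = max(level, 0.05)
            comp = np.clip(img / level, 0, 255).astype(np.uint8)  # pixel compensation
            return level, comp

        img = np.random.randint(0, 180, size=(480, 640), dtype=np.uint8)  # darkish frame
        # Hypothetical classifier output: dark images tolerate more aggressive dimming
        clip_pct = 95 if img.mean() < 100 else 99
        level, comp = global_dimming(img, clip_pct)
        print(f"backlight level {level:.2f}; power roughly scales with this factor")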

  16. Mapping potential Blanding's turtle habitat using aerial orthophotographic imagery and object based classification

    NASA Astrophysics Data System (ADS)

    Barker, Rebecca

    Blanding's turtle (Emydoidea blandingii) is a threatened species in southern Quebec that is being inventoried to determine abundance and potential habitat by the Quebec Ministry of Natural Resources and Wildlife. In collaboration with that program and using spring leaf-off aerial orthophotos of Gatineau Park, attributes associated with known habitat criteria were analyzed: wetlands with open water, vegetation mounds for camouflage and thermoregulation, and logs for spring sun-basking. Pixel-based classification to separate wetlands from other land cover types was followed by object-based segmentation and rule-based classification of within-wetland vegetation and logs. Classifications integrated several image characteristics including texture, context, shape, area and spectral attributes. Field data and visual interpretation showed the accuracies of the wetland and within-wetland habitat feature classifications to be over 82.5%. The wetland classification results were used to develop a ranked potential habitat suitability map for Blanding's turtle that can be employed in conservation planning and management.

  17. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  18. HYDROLOGIC REGIME CLASSIFICATION OF LAKE MICHIGAN COASTAL RIVERINE WETLANDS BASED ON WATERSHED CHARACTERISTICS

    EPA Science Inventory

    Classification of wetlands systems is needed not only to establish reference condition, but also to predict the relative sensitivity of different wetland classes. In the current study, we examined the potential for ecoregion- versus flow-based classification strategies to explain...

  19. FIELD TESTS OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED WATERSHED CLASSIFICATION SCHEMES IN THE GREAT LAKES BASIN

    EPA Science Inventory

    We compared classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme for two case studies involving 1) Lake Superior tributaries and 2) watersheds of riverine coastal wetlands ...

  1. Renoprotection and the Bardoxolone Methyl Story - Is This the Right Way Forward? A Novel View of Renoprotection in CKD Trials: A New Classification Scheme for Renoprotective Agents.

    PubMed

    Onuigbo, Macaulay

    2013-01-01

    In the June 2011 issue of the New England Journal of Medicine, the BEAM (Bardoxolone Methyl Treatment: Renal Function in CKD/Type 2 Diabetes) trial investigators rekindled new interest and also some controversy regarding the concept of renoprotection and the role of renoprotective agents, when they reported significant increases in the mean estimated glomerular filtration rate (eGFR) in diabetic chronic kidney disease (CKD) patients with an eGFR of 20-45 ml/min/1.73 m(2) of body surface area at enrollment who received the trial drug bardoxolone methyl versus placebo. Unfortunately, subsequent phase IIIb trials failed to show that the drug is a safe alternative renoprotective agent. Current renoprotection paradigms depend wholly and entirely on angiotensin blockade; however, these agents [angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs)] have proved to be imperfect renoprotective agents. In this review, we examine the mechanistic limitations of the various previous randomized controlled trials on CKD renoprotection, including the paucity of veritable, elaborate and systematic assessment methods for the documentation and reporting of individual patient-level, drug-related adverse events. We review the evidence base for the presence of putative, multiple independent and unrelated pathogenetic mechanisms that drive (diabetic and non-diabetic) CKD progression. Furthermore, we examine the validity, or lack thereof, of the hyped notion that the blockade of a single molecule (angiotensin II), which can only antagonize the angiotensin cascade, would veritably successfully, consistently and unfailingly deliver adequate and qualitative renoprotection results in (diabetic and non-diabetic) CKD patients. We clearly posit that there is this overarching impetus to arrive at the inference that multiple, disparately diverse and independent pathways, including any veritable combination of the mechanisms that we examine in this review, and many

  2. Renoprotection and the Bardoxolone Methyl Story – Is This the Right Way Forward? A Novel View of Renoprotection in CKD Trials: A New Classification Scheme for Renoprotective Agents

    PubMed Central

    Onuigbo, Macaulay

    2013-01-01

    In the June 2011 issue of the New England Journal of Medicine, the BEAM (Bardoxolone Methyl Treatment: Renal Function in CKD/Type 2 Diabetes) trial investigators rekindled new interest and also some controversy regarding the concept of renoprotection and the role of renoprotective agents, when they reported significant increases in the mean estimated glomerular filtration rate (eGFR) in diabetic chronic kidney disease (CKD) patients with an eGFR of 20-45 ml/min/1.73 m2 of body surface area at enrollment who received the trial drug bardoxolone methyl versus placebo. Unfortunately, subsequent phase IIIb trials failed to show that the drug is a safe alternative renoprotective agent. Current renoprotection paradigms depend wholly and entirely on angiotensin blockade; however, these agents [angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs)] have proved to be imperfect renoprotective agents. In this review, we examine the mechanistic limitations of the various previous randomized controlled trials on CKD renoprotection, including the paucity of veritable, elaborate and systematic assessment methods for the documentation and reporting of individual patient-level, drug-related adverse events. We review the evidence base for the presence of putative, multiple independent and unrelated pathogenetic mechanisms that drive (diabetic and non-diabetic) CKD progression. Furthermore, we examine the validity, or lack thereof, of the hyped notion that the blockade of a single molecule (angiotensin II), which can only antagonize the angiotensin cascade, would veritably successfully, consistently and unfailingly deliver adequate and qualitative renoprotection results in (diabetic and non-diabetic) CKD patients. We clearly posit that there is this overarching impetus to arrive at the inference that multiple, disparately diverse and independent pathways, including any veritable combination of the mechanisms that we examine in this review, and many

  3. Improving Agent Based Models and Validation through Data Fusion

    PubMed Central

    Laskowski, Marek; Demianyk, Bryan C.P.; Friesen, Marcia R.; McLeod, Robert D.; Mukhi, Shamir N.

    2011-01-01

    This work is contextualized in research on the modeling and simulation of infection spread within a community or population, with the objective of providing a public health and policy tool for assessing the dynamics of infection spread and the qualitative impacts of public health interventions. This work integrates real data sources into an Agent Based Model (ABM) to simulate respiratory infection spread within a small municipality. The novelty lies in integrating data sources that are not obvious candidates for ABM infection-spread models. The ABM is a spatial-temporal model inclusive of behavioral and interaction patterns between individual agents on a real topography. The agent behaviours (movements and interactions) are fed by census/demographic data, integrated with real data from a telecommunication service provider (cellular records) and person-person contact data obtained via a custom 3G smartphone application that logs Bluetooth connectivity between devices. Each source provides data of varying type and granularity, thereby enhancing the robustness of the model. The work demonstrates opportunities in data mining and fusion that can be used by policy and decision makers. The data become real-world inputs into individual SIR disease spread models and variants, thereby building credible and non-intrusive models to qualitatively simulate and assess public health interventions at the population level. PMID:23569606
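
    The way logged contact pairs (such as the Bluetooth co-location records mentioned above) can drive a discrete-time SIR agent model is easy to sketch. The contact log below is synthetic and every parameter is an illustrative assumption, not the authors' calibration.

        import random

        # Hypothetical contact log: (agent_a, agent_b) pairs per time step,
        # standing in for Bluetooth co-location records.
        random.seed(3)
        N, STEPS = 200, 60
        contacts = [[(random.randrange(N), random.randrange(N)) for _ in range(300)]
                    for _ in range(STEPS)]

        state = ["S"] * N                         # S, I, or R per agent
        for seed_agent in random.sample(range(N), 5):
            state[seed_agent] = "I"

        P_TRANSMIT, P_RECOVER = 0.08, 0.10        # illustrative parameters
        for day in range(STEPS):
            for a, b in contacts[day]:
                for src, dst in ((a, b), (b, a)):
                    if state[src] == "I" and state[dst] == "S" and random.random() < P_TRANSMIT:
                        state[dst] = "I"
            for i in range(N):
                if state[i] == "I" and random.random() < P_RECOVER:
                    state[i] = "R"

        print({s: state.count(s) for s in "SIR"})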

  4. Agent-based modelling of consumer energy choices

    NASA Astrophysics Data System (ADS)

    Rai, Varun; Henry, Adam Douglas

    2016-06-01

    Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers, such as individual households, using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed.

  5. Agent-based Transaction management for Mobile Multidatabase

    SciTech Connect

    Ongtang, Machigar; Hurson, Ali R.; Jiao, Yu; Potok, Thomas E

    2007-01-01

    The requirements to access and manipulate data across multiple heterogeneous existing databases, together with the proliferation of mobile technologies, have propelled the development of mobile multidatabase systems (MDBS). In this environment, transaction management is not a trivial task due to technological constraints. Agent technology is an evolving research area that has been applied to several application domains. This paper proposes an Agent-based Transaction Management for Mobile Multidatabase (AT3M) system. AT3M applies static and mobile agents to manage transaction processing in a mobile multidatabase system. It enables fully distributed transaction management, accommodates the mobility of mobile clients, and allows global subtransactions to be processed in parallel. The proposed algorithm utilizes the hierarchical metadata structure of the Summary Schema Model (SSM), which captures semantic information about data objects in the underlying local databases at different levels of abstraction. Simulations show that AT3M is well suited to the mobile multidatabase environment and outperforms, in many respects, the existing V-Locking algorithm designed for the same environment.

  6. Agent-based copyright protection architecture for online electronic publishing

    NASA Astrophysics Data System (ADS)

    Yi, Xun; Kitazawa, S.; Okamoto, Ejii; Wang, Xiao F.; Lam, KwokYan; Tu, S.

    1999-04-01

    Electronic publishing faces one major technical and economic challenge: how to prevent individuals from easily copying and illegally distributing electronic documents. Conventional cryptographic systems permit only valid key-holders access to encrypted data, but once such data is decrypted there is no way to track its reproduction or retransmission. They therefore provide little protection against data piracy, in which a publisher is confronted with unauthorized reproduction of information. In this paper, we explore the use of intelligent agent, digital watermark and cryptographic techniques to discourage the distribution of illegal electronic copies, and we propose an agent-based strategy to protect the copyright of on-line electronic publishing. In fact, it is impossible to develop an absolutely secure copyright protection architecture for on-line electronic publishing that can prevent a malicious customer from spending a great deal of effort on analyzing the software and finally obtaining the plaintext of the encrypted electronic document. Our work therefore aims to make the cost of analyzing the agent and removing the watermark much greater than the value of the electronic document itself.

  7. SAR target classification based on multiscale sparse representation

    NASA Astrophysics Data System (ADS)

    Ruan, Huaiyu; Zhang, Rong; Li, Jingge; Zhan, Yibing

    2016-03-01

    We propose a novel multiscale sparse representation approach for SAR target classification. It first extracts dense SIFT descriptors at multiple scales, then trains a global multiscale dictionary with a sparse coding algorithm. After obtaining the sparse representation, the method applies spatial pyramid matching (SPM) and max pooling to summarize the features for each image. The proposed method provides more information and descriptive power than single-scale approaches. Moreover, it incurs less extra computation than existing multiscale methods, which compute a dictionary for each scale. The MSTAR database and a ship database collected from TerraSAR-X images are used in the classification experiments. Results show that the best overall classification rate of the proposed approach reaches 98.83% on the MSTAR database and 92.67% on the TerraSAR-X ship database.
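
    The dictionary-plus-pooling pipeline can be sketched with scikit-learn's dictionary learning. The descriptors below are synthetic stand-ins for dense SIFT, the dictionary size and sparsity level are assumptions, and SPM's spatial binning is omitted for brevity (pooling is done over all descriptors at once).

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(0)
        # Stand-ins for dense SIFT descriptors pooled from several scales (128-D each).
        descriptors = rng.normal(size=(2000, 128))

        # One global dictionary across scales, as in the paper (sizes are assumptions).
        dico = MiniBatchDictionaryLearning(n_components=256, transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5, random_state=0)
        dico.fit(descriptors)

        def image_feature(img_descriptors):
            """Sparse-code an image's descriptors, then max-pool over descriptors."""
            codes = dico.transform(img_descriptors)   # (n_descriptors, 256) sparse codes
            return np.abs(codes).max(axis=0)          # max pooling: one 256-D vector

        feat = image_feature(rng.normal(size=(150, 128)))
        print(feat.shape)   # (256,) per-image feature, fed to e.g. a linear SVM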

  8. A new approach to a maximum à posteriori-based kernel classification method.

    PubMed

    Nopriadi; Yamashita, Yukihiko

    2012-09-01

    This paper presents a new approach to maximum a posteriori (MAP)-based classification, specifically MAP-based kernel classification trained by linear programming (MAPLP). Unlike traditional MAP-based classifiers, MAPLP does not directly estimate a posterior probability for classification. Instead, it introduces a kernelized function into an objective function that behaves similarly to a MAP-based classifier. To evaluate the performance of MAPLP, a binary classification experiment was performed on 13 datasets. The results are compared with those of conventional MAP-based kernel classifiers and of other state-of-the-art classification methods, and show that MAPLP performs promisingly against them. We argue that the proposed approach makes a significant contribution to MAP-based classification research: it widens the freedom to choose an objective function, is not constrained to the strict Bayesian sense, and can be solved by linear programming. A substantial advantage of the proposed approach is that the objective function is undemanding, having only a single parameter; this simplicity leaves room for further development. PMID:22721808

  9. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
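
    The paper's bounds are built on a nonparametric f-divergence estimated directly from data. As a simpler illustration of an empirically estimable bound on the Bayes error, the sketch below uses the classical Cover-Hart relation between the leave-one-out 1-NN error and the Bayes error for binary problems; this is a related but different bound from the one proposed in the paper, and the data are synthetic.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1, size=(300, 5)),
                       rng.normal(0.8, 1, size=(300, 5))])
        y = np.array([0] * 300 + [1] * 300)

        # Leave-one-out estimate of the 1-NN error rate.
        pred = cross_val_predict(KNeighborsClassifier(n_neighbors=1), X, y, cv=len(y))
        r = np.mean(pred != y)

        # Cover-Hart: asymptotically P* <= R_1NN <= 2 P* (1 - P*), hence for r <= 1/2:
        lower = (1 - np.sqrt(max(0.0, 1 - 2 * r))) / 2
        upper = r
        print(f"1-NN error {r:.3f} -> Bayes error in [{lower:.3f}, {upper:.3f}]")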

  10. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  11. Topic Modelling for Object-Based Classification of Vhr Satellite Images Based on Multiscale Segmentations

    NASA Astrophysics Data System (ADS)

    Shen, Li; Wu, Linmei; Li, Zhipeng

    2016-06-01

    Multiscale segmentation is a key prerequisite step for object-based classification methods. However, it is often not possible to determine a single optimal scale for the image to be classified, because different geo-objects, and even instances of the same geo-object, may appear at different scales in one image. In this paper, an object-based classification method based on multiscale segmentation results, in the framework of topic modelling, is proposed to classify VHR satellite images in an entirely unsupervised fashion. In the topic modelling stage, grayscale histogram distributions for each geo-object class and each segment are learned in an unsupervised manner from the multiscale segments. In the classification stage, each segment is allocated a geo-object class label by comparing the similarity between the grayscale histogram distributions of the segment and of each geo-object class. Experimental results show that the proposed method performs better than traditional methods based on topic modelling.
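
    The unsupervised topic-modelling stage can be sketched with scikit-learn's LDA applied to segment histograms, with each topic playing the role of a geo-object class. The histograms below are synthetic, and the bin and topic counts are assumptions; the paper's exact model and similarity comparison are not reproduced.

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        rng = np.random.default_rng(0)
        # Stand-ins for grayscale histograms (64 bins) of segments from several scales.
        segment_hists = rng.integers(0, 50, size=(500, 64))

        # Each "topic" plays the role of a geo-object class (count is an assumption).
        lda = LatentDirichletAllocation(n_components=5, random_state=0)
        theta = lda.fit_transform(segment_hists)   # per-segment topic proportions

        # Unsupervised labelling: assign each segment its dominant topic.
        labels = theta.argmax(axis=1)
        print(np.bincount(labels))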

  12. Maximum-Margin Based Representation Learning from Multiple Atlases for Alzheimer’s Disease Classification

    PubMed Central

    Min, Rui; Cheng, Jian; Price, True; Wu, Guorong; Shen, Dinggang

    2015-01-01

    In order to establish the correspondences between different brains for comparison, spatial normalization based morphometric measurements have been widely used in the analysis of Alzheimer’s disease (AD). In the literature, different subjects are often compared in one atlas space, which may be insufficient in revealing complex brain changes. In this paper, instead of deploying one atlas for feature extraction and classification, we propose a maximum-margin based representation learning (MMRL) method to learn the optimal representation from multiple atlases. Unlike traditional methods that perform the representation learning separately from the classification, we propose to learn the new representation jointly with the classification model, which is more powerful in discriminating AD patients from normal controls (NC). We evaluated the proposed method on the ADNI database, and achieved 90.69% for AD/NC classification and 73.69% for p-MCI/s-MCI classification. PMID:25485381

  13. The method of narrow-band audio classification based on universal noise background model

    NASA Astrophysics Data System (ADS)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods mainly depend on feature extraction over whole audio clips, which increases the time required for classification. This paper presents an approach for classifying narrow-band audio streams based on frame-level feature extraction. The audio signals are divided into speech, instrumental music, song with accompaniment, and noise using Gaussian mixture models (GMM). To cope with changing real-world environments, a universal noise background model (UNBM) covering white, street, factory and car-interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high audio classification accuracy, especially under each of the noise backgrounds used, and keeps the classification time below one second.
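
    The per-class GMM scheme can be sketched as follows: fit one Gaussian mixture per audio class on frame-level features, then classify a stream by the model with the highest average log-likelihood. The features are synthetic stand-ins, the mixture size is an assumption, and the single "noise" model here would be replaced by the UNBM trained on pooled noise types.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        classes = ["speech", "music", "song", "noise"]

        # Stand-ins for frame-level features (e.g. MFCC-like, 13-D) per audio class.
        train = {c: rng.normal(loc=i, size=(400, 13)) for i, c in enumerate(classes)}

        # One GMM per class; a UNBM would replace the single "noise" model with one
        # trained on pooled white/street/factory/car noise (assumption here).
        models = {c: GaussianMixture(n_components=8, covariance_type="diag",
                                     random_state=0).fit(X) for c, X in train.items()}

        def classify(frames):
            """Average frame log-likelihood under each class model; pick the best."""
            scores = {c: m.score(frames) for c, m in models.items()}   # mean log-lik
            return max(scores, key=scores.get)

        print(classify(rng.normal(loc=1, size=(100, 13))))   # expected: "music"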

  14. Agent-based model of macrophage action on endocrine pancreas.

    PubMed

    Martínez, Ignacio V; Gómez, Enrique J; Hernando, M Elena; Villares, Ricardo; Mellado, Mario

    2012-01-01

    This paper proposes an agent-based model of the action of macrophages on the beta cells of the endocrine pancreas. The aim of this model is to simulate the processes of beta cell proliferation and apoptosis and also the process of phagocytosis of cell debris by macrophages, all of which are related to the onset of the autoimmune response in type 1 diabetes. We have used data from the scientific literature to design the model. The results show that the model obtains good approximations to real processes and could be used to shed light on some open questions concerning such processes. PMID:23155767

  15. Ontology-based, multi-agent support of production management

    NASA Astrophysics Data System (ADS)

    Meridou, Despina T.; Inden, Udo; Rückemann, Claus-Peter; Patrikakis, Charalampos Z.; Kaklamani, Dimitra-Theodora I.; Venieris, Iakovos S.

    2016-06-01

    In recent years, reported incidents of failed aircraft ramp-ups and delayed small-lot production have increased substantially. In this paper, we present a production management platform that combines agent-based techniques with the Service Oriented Architecture paradigm. The platform takes advantage of the functionality offered by the semantic web language OWL, which allows its users and services to speak a common language and, at the same time, facilitates risk management and decision making.

  16. Nanocellulose-based composites and bioactive agents for food packaging.

    PubMed

    Khan, Avik; Huq, Tanzina; Khan, Ruhul A; Riedl, Bernard; Lacroix, Monique

    2014-01-01

    Global environmental concern regarding the use of petroleum-based packaging materials is encouraging researchers and industries to search for packaging materials made from natural biopolymers. Bioactive packaging is gaining more and more interest, not only for its environmentally friendly nature but also for its potential to improve food quality and safety during packaging. Some of the shortcomings of biopolymers, such as weak mechanical and barrier properties, can be significantly mitigated by the use of nanomaterials such as nanocellulose (NC). The use of NC can extend food shelf life and can also improve food quality, as NC can serve as a carrier of active substances such as antioxidants and antimicrobials. NC fiber-based composites have great potential in the preparation of cheap, lightweight, and very strong nanocomposites for food packaging. This review highlights the potential use and application of NC fiber-based nanocomposites and the incorporation of bioactive agents in food packaging. PMID:24188266

  17. Analysis of uncertainty in multi-temporal object-based classification

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Knöfel, Patrick; Conrad, Christopher

    2015-07-01

    Agricultural management increasingly uses crop maps based on the classification of remotely sensed data. However, classification errors propagate to errors in model outputs, for instance in agricultural production monitoring (yield, water demand) or crop acreage calculation. Hence, knowledge of the spatial variability of classifier performance is important information for the user, but it is not provided by traditional accuracy assessments, which are based on the confusion matrix. In this study, classification uncertainty was analyzed for the support vector machine (SVM) algorithm. SVM was applied to multi-spectral RapidEye time series data from different agricultural landscapes and years. Entropy was calculated as a measure of classification uncertainty, based on the per-object class membership estimates from the SVM algorithm. Permuting all possible combinations of available images allowed investigating the impact of image acquisition frequency and timing on classification uncertainty. Results show that multi-temporal datasets decrease classification uncertainty for different crops compared with single datasets, but there was no "one-image-combination-fits-all" solution. The number and acquisition timing of the images for which a decrease in uncertainty could be realized proved to be specific to a given landscape, and for each crop they differed across landscapes. For some crops, an increase in uncertainty was observed when increasing the number of images, even if classification accuracy improved. Random forest regression was employed to investigate the impact of different explanatory variables on the observed spatial pattern of classification uncertainty. It was strongly influenced by factors related to agricultural management and training sample density. Lower uncertainties were revealed for fields close to rivers or irrigation canals. This study demonstrates that classification uncertainty estimates
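
    The uncertainty measure used above, Shannon entropy of the per-object class membership vector, can be computed directly from a probabilistic classifier's outputs. A minimal sketch with synthetic features and an SVM's predict_proba (class counts and data are assumptions):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 10))            # per-object time-series features
        y = rng.integers(0, 4, size=300)          # 4 hypothetical crop classes

        svm = SVC(probability=True, random_state=0).fit(X[:200], y[:200])
        proba = svm.predict_proba(X[200:])        # per-object class memberships

        # Shannon entropy of the membership vector as per-object uncertainty.
        entropy = -(proba * np.log(np.clip(proba, 1e-12, None))).sum(axis=1)
        print("mean uncertainty:", entropy.mean())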

  18. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    PubMed Central

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that controls ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape

  19. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function.

    PubMed

    Groenendyk, Derek G; Ferré, Ty P A; Thorp, Kelly R; Rice, Amy K

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that controls ecosystem services, food production, and many other processes at the Earth's surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
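
    The clustering step described in these two records, k-means over simulated hydrologic responses rather than over texture fractions, is straightforward to sketch. The synthetic exponential drainage curves below stand in for HYDRUS-1D output; the soil count, time grid, and cluster count are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Stand-ins for simulated responses: drainage curves over 50 time steps
        # for 120 soils spanning the texture triangle (HYDRUS-1D output in the paper).
        t = np.linspace(0, 5, 50)
        rates = rng.uniform(0.3, 3.0, size=120)
        responses = np.exp(-np.outer(rates, t))   # 120 x 50 response matrix

        # Cluster soils by hydrologic response rather than by texture.
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)
        print(np.bincount(km.labels_))            # sizes of the response classes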

  20. Skin injury model classification based on shape vector analysis

    PubMed Central

    2012-01-01

    Background: Skin injuries can be crucial in judicial decision making. Forensic experts base their classification on subjective opinions. This study investigates whether known classes of simulated skin injuries are correctly classified statistically based on 3D surface models and derived numerical shape descriptors. Methods: Skin injury surface characteristics are simulated with plasticine. Six injury classes (abrasions, incised wounds, gunshot entry wounds, smooth and textured strangulation marks, and patterned injuries) with 18 instances each are used for a k-fold cross validation with six partitions. Deformed plasticine models are captured with a 3D surface scanner. Mean curvature is estimated for each polygon surface vertex. Subsequently, distance distributions and derived aspect ratios, convex hulls, concentric spheres, hyperbolic points and Fourier transforms are used to generate 1284-dimensional shape vectors. Subsequent descriptor reduction maximizing the SNR (signal-to-noise ratio) results in an average of 41 descriptors (varying across k-folds). With the non-normal multivariate distribution of heteroskedastic data, the requirements for LDA (linear discriminant analysis) are not met. Thus, the shrinkage parameters of RDA (regularized discriminant analysis) are optimized, yielding the best performance with λ = 0.99 and γ = 0.001. Results: The Receiver Operating Characteristic of a descriptive RDA yields an ideal Area Under the Curve of 1.0 for all six categories. Predictive RDA results in an average CRR (correct recognition rate) of 97.22% under a six-partition k-fold. Adding uniform noise within the range of one standard deviation degrades the average CRR to 71.3%. Conclusions: Digitized 3D surface shape data can be used to automatically classify idealized shape models of simulated skin injuries. Deriving some well established descriptors such as histograms, saddle shape of hyperbolic points or convex hulls with subsequent reduction of dimensionality while maximizing SNR

  1. Stromal-Based Signatures for the Classification of Gastric Cancer.

    PubMed

    Uhlik, Mark T; Liu, Jiangang; Falcon, Beverly L; Iyer, Seema; Stewart, Julie; Celikkaya, Hilal; O'Mahony, Marguerita; Sevinsky, Christopher; Lowes, Christina; Douglass, Larry; Jeffries, Cynthia; Bodenmiller, Diane; Chintharlapalli, Sudhakar; Fischl, Anthony; Gerald, Damien; Xue, Qi; Lee, Jee-Yun; Santamaria-Pang, Alberto; Al-Kofahi, Yousef; Sui, Yunxia; Desai, Keyur; Doman, Thompson; Aggarwal, Amit; Carter, Julia H; Pytowski, Bronislaw; Jaminet, Shou-Ching; Ginty, Fiona; Nasir, Aejaz; Nagy, Janice A; Dvorak, Harold F; Benjamin, Laura E

    2016-05-01

    Treatment of metastatic gastric cancer typically involves chemotherapy and monoclonal antibodies targeting HER2 (ERBB2) and VEGFR2 (KDR). However, reliable methods to identify patients who would benefit most from a combination of treatment modalities targeting the tumor stroma, including new immunotherapy approaches, are still lacking. Therefore, we integrated a mouse model of stromal activation and gastric cancer genomic information to identify gene expression signatures that may inform treatment strategies. We generated a mouse model in which VEGF-A is expressed via adenovirus, enabling a stromal response marked by immune infiltration and angiogenesis at the injection site, and identified distinct stromal gene expression signatures. With these data, we designed multiplexed IHC assays that were applied to human primary gastric tumors and classified each tumor to a dominant stromal phenotype representative of the vascular and immune diversity found in gastric cancer. We also refined the stromal gene signatures and explored their relation to the dominant patient phenotypes identified by recent large-scale studies of gastric cancer genomics (The Cancer Genome Atlas and Asian Cancer Research Group), revealing four distinct stromal phenotypes. Collectively, these findings suggest that a genomics-based systems approach focused on the tumor stroma can be used to discover putative predictive biomarkers of treatment response, especially to antiangiogenesis agents and immunotherapy, thus offering an opportunity to improve patient stratification. Cancer Res; 76(9); 2573-86. ©2016 AACR. PMID:27197264

  2. Field-Based Land Cover Classification Aided with Texture Analyses Using Terrasar-X Data

    NASA Astrophysics Data System (ADS)

    Mahmoud, Ali; Pradhan, Biswajeet; Buchroithner, Manfred

    The present study aims to evaluate the field-based approach to land cover classification using recently launched high resolution SAR data. A TerraSAR-X1 (TSX-1) strip-mode image, coupled with digital orthophotos with 20 cm spatial resolution, was used for land cover classification and parcel mapping, respectively. Different filtering and texture analysis techniques were applied to extract textural information from the TSX-1 image in order to assess the improvement in classification accuracy. Several parcel attributes were derived from the TSX-1 image in order to identify the attributes that best discriminate between different land cover types. These attributes were then further analyzed with statistical and various image classification methods for land cover classification. The results showed that textural analysis achieved higher classification accuracy than classification without texture features. The authors conclude that an integrated land cover classification using the textural information in TerraSAR-X1 imagery has high potential for land cover mapping. Key words: land cover classification, TerraSAR-X1, field-based, texture analysis
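
    One common way to extract per-parcel texture attributes of the kind used here is the gray-level co-occurrence matrix (GLCM). A minimal sketch with scikit-image follows; the parcel is synthetic, and the quantisation, distances, angles, and property set are assumptions rather than the study's exact configuration (function names are the skimage >= 0.19 spellings).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        # Stand-in for one parcel clipped from a despeckled TSX-1 amplitude image,
        # quantised to 32 grey levels to keep the co-occurrence matrix small.
        parcel = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)

        glcm = graycomatrix(parcel, distances=[1], angles=[0, np.pi / 2],
                            levels=32, symmetric=True, normed=True)

        # Per-parcel texture attributes to feed a classifier alongside backscatter.
        features = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)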

  3. Optimal query-based relevance feedback in medical image retrieval using score fusion-based classification.

    PubMed

    Behnam, Mohammad; Pourghassem, Hossein

    2015-04-01

    In this paper, a new content-based medical image retrieval (CBMIR) framework using an effective classification method and a novel relevance feedback (RF) approach is proposed. For a large-scale database with a diverse collection of modalities, query image classification is essential, first to reduce computational complexity and second to increase the influence of data fusion by removing unimportant data and focusing on the more valuable information. Hence, we find the probability distribution of classes in the database using a Gaussian mixture model (GMM) for each feature descriptor, and then, by fusing the scores obtained from the dependency probabilities, the most relevant clusters are identified for a given query. Afterwards, the visual similarity between the query image and the images in the relevant clusters is calculated. This procedure is performed separately for each feature descriptor, and the results are then fused using a feature-similarity ranking-level fusion algorithm. At the RF level, we propose a new approach to find optimal queries based on relevant images. The main idea is based on estimating the density function of positive images and a strategy of moving toward the aggregation of the estimated density function. The proposed framework has been evaluated on the ImageCLEF 2005 database consisting of 10,000 medical X-ray images in 57 semantic classes. The experimental results show that, compared with existing CBMIR systems, our framework achieves acceptable performance both in image classification and in image retrieval by RF. PMID:25246167

  4. [Pharmacological agents and transport nanosystems based on plant phospholipids].

    PubMed

    Medvedeva, N V; Prosorovskiy, V N; Ignatov, D V; Druzilovskaya, O S; Kudinov, V A; Kasatkina, E O; Tikhonova, E G; Ipatova, O M

    2015-01-01

    A new generation of plant phosphatidylcholine (PC)-based pharmacological agents has been developed under the leadership of Academician A.I. Archakov at the Institute of Biomedical Chemistry (IBMC). For their production, a unique technology was elaborated that yields dry lyophilized phospholipid nanoparticles of 30 nm. The successful practical application of PC nanoparticles as a drug agent is illustrated by Phosphogliv (oral and injection formulations). Developed at IBMC for the treatment of liver diseases, including viral hepatitis, Phosphogliv (currently marketed by the "Pharmstandard" company) was approved for clinical application in 2000 and is widely used in medical practice. Based on the technology developed and scaled up at IBMC for the preparation of ultra-small phospholipid nanoparticles without the use of detergents/surfactants and stabilizers, another drug preparation, Phospholipovit, exhibiting pronounced hypolipidemic properties, has been obtained. Recently completed preclinical studies have shown that PC nanoparticles of 20-30 nm activate reverse cholesterol transport (RCT) and in this respect are more active than the well-known foreign preparation Essentiale. Phospholipovit is now in clinical trials (phase 1 completed). PC was also used as the basis for the development of a transport nanosystem with a particle size of 20-25 nm in diameter capable of incorporating drug substances from various therapeutic groups. Using several drug substances as examples, increased bioavailability and specific activity were demonstrated for formulations equipped with this transport nanosystem. Such formulations have been developed for pharmacological agents including doxorubicin, rifampin, budesonide, chlorin E6, prednisone, and others. PMID:25978388

  5. A novel alignment repulsion algorithm for flocking of multi-agent systems based on the number of neighbours per agent

    NASA Astrophysics Data System (ADS)

    Kahani, R.; Sedigh, A. K.; Mahjani, M. Gh.

    2015-12-01

    In this paper, an energy-based control methodology is proposed to satisfy the Reynolds three rules in a flock of multiple agents. First, a control law is provided that is directly derived from the passivity theorem. In the next step, the Number of Neighbours Alignment/Repulsion algorithm is introduced for a flock of agents that loses cohesion and the uniform joint connectivity condition. With this method, each agent tries to follow the agents that escape its neighbourhood by considering the velocity at escape time and the number of neighbours. It is mathematically proved that the motion of multiple agents converges to a rigid and uncrowded flock if the group is jointly connected for just an instant. Moreover, the conditions for collision avoidance are guaranteed during the entire process. Finally, simulation results are presented to show the effectiveness of the proposed methodology.

  6. Classification of idiopathic toe walking based on gait analysis: development and application of the ITW severity classification.

    PubMed

    Alvarez, Christine; De Vera, Mary; Beauchamp, Richard; Ward, Valerie; Black, Alec

    2007-09-01

    Idiopathic toe walking (ITW), considered abnormal after the age of 3 years, is a common complaint seen by medical professionals, especially orthopaedic surgeons and physiotherapists. A classification for idiopathic toe walking would be helpful to better understand the condition, delineate true idiopathic toe walkers from patients with other conditions, and allow for the assignment of a severity gradation, thereby directing management of ITW. The purpose of this study was to describe idiopathic toe walking and develop a toe walking classification scheme in a large sample of children. Three primary criteria, presence of a first ankle rocker, presence of an early third ankle rocker, and predominant early ankle moment, were used to classify idiopathic toe walking into three severity groups: Type 1 (mild), Type 2 (moderate), and Type 3 (severe). Supporting data based on ankle range of motion, sagittal joint powers, knee kinematics, and EMG were also analyzed. Prospectively collected gait analysis data of 133 children (266 feet) with idiopathic toe walking were analyzed. Subjects' ages ranged from 4.19 to 15.96 years with a mean age of 8.80 years. Pooling right and left foot data, 40 feet were classified as Type 1, 129 as Type 2, and 90 as Type 3. Seven feet were unclassifiable. Statistical analysis of the continuous variables comprising the primary criteria showed that the toe walking severity classification was able to differentiate between three levels of toe walking severity. This classification allowed for the quantitative description of the idiopathic toe walking pattern as well as the delineation of three distinct types of ITW patients (mild, moderate, and severe). PMID:17161602
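
    A minimal sketch of how the three binary gait criteria could map to a severity type is given below; the counting rule is our own illustration, as the paper's exact combination logic is more detailed.

```python
# Illustrative mapping from the three primary gait criteria to an ITW
# severity type. The paper's actual rule logic is more detailed; this only
# shows the shape of such a rule-based classifier.
def itw_severity(first_ankle_rocker: bool,
                 early_third_ankle_rocker: bool,
                 predominant_early_ankle_moment: bool) -> str:
    abnormal = sum([not first_ankle_rocker,        # absence is abnormal
                    early_third_ankle_rocker,      # presence is abnormal
                    predominant_early_ankle_moment])
    return {0: "not ITW", 1: "Type 1 (mild)",
            2: "Type 2 (moderate)", 3: "Type 3 (severe)"}[abnormal]
```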

  7. An agent-based approach to financial stylized facts

    NASA Astrophysics Data System (ADS)

    Shimokawa, Tetsuya; Suzuki, Kyoko; Misawa, Tadanobu

    2007-06-01

    An important challenge of financial theory in recent years is to construct more sophisticated models that are consistent with as many as possible of the financial stylized facts that cannot be explained by traditional models. Recently, psychological studies on decision making under uncertainty, which originate in Kahneman and Tversky's research, have attracted a lot of interest as key factors for explaining the financial stylized facts. These psychological results have been applied to the theory of investors' decision making and financial equilibrium modeling. Following these behavioral finance studies, this paper proposes an agent-based equilibrium model with prospect-theoretic features of investors. Our goal is to point out the possibility that the loss-averse feature of investors explains a vast number of financial stylized facts and plays a crucial role in the price formation of financial markets. The price process endogenously generated by our model is consistent not only with the equity premium puzzle and the volatility puzzle, but also with excess kurtosis, asymmetry of the return distribution, autocorrelation of return volatility, and cross-correlation between return volatility and trading volume. Moreover, by using agent-based simulations, the paper also provides a rigorous explanation, from the viewpoint of a lack of market liquidity, of the size effect: that small-sized stocks enjoy excess returns compared to large-sized stocks.
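
    The loss-averse ingredient the paper builds on can be written down directly. Below is the standard Kahneman-Tversky value function; the parameter values are the common Tversky-Kahneman (1992) estimates, not necessarily those used in this model.

```python
# Kahneman-Tversky prospect theory value function used to give simulated
# investors a loss-averse utility. Parameters are the standard
# Tversky-Kahneman (1992) estimates, not those of this particular paper.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss-averse) for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta
```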

  8. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis

    PubMed Central

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (that hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that there is a production of sufficient number of differentiated cells (RBCs, WBCs, etc.). We prove that our model of bone marrow is biologically consistent and it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed agent-based simulation of the model of bone marrow system proposed in this paper. We have also included parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402

  9. From Compartmentalized to Agent-based Models of Epidemics

    NASA Astrophysics Data System (ADS)

    Macal, Charles

    Supporting decisions in the throes of an impending epidemic poses distinct technical challenges arising from the uncertainties in modeling disease propagation processes and the need for producing timely answers to policy questions. Compartmental models, because of their relative simplicity, produce timely information, but often do not include the level of fidelity needed to answer specific policy questions. Highly granular agent-based simulations produce an extensive amount of information on all aspects of a simulated epidemic, yet complex models often cannot produce this information in a timely manner. We propose a two-phased approach to addressing the tradeoff between model complexity and the speed at which models can be used to answer questions about an impending outbreak. In the first phase, in advance of an epidemic, ensembles of highly granular agent-based simulations are run over the entire parameter space, characterizing the space of possible model outcomes and uncertainties. Meta-models are derived that characterize model outcomes as dependent on uncertainties in disease parameters, data, and structural relationships. In the second phase, envisioned as taking place during an epidemic, the meta-model is run in combination with compartmental models, which can be run very quickly. Model outcomes are compared as a basis for establishing uncertainties in model forecasts. This work is supported by the U.S. Department of Energy under Contract number DE-AC02-06CH11357 and National Science Foundation (NSF) RAPID Award DEB-1516428.

  10. Router Agent Technology for Policy-Based Network Management

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston

    2011-01-01

    This innovation can be run as a standalone network application on any computer in a networked environment. The design can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors that were encountered. The innovation also automatically generates a time-stamped log file of all updates to the router it is configured to control. Once installed on a local network computer and started, this technology is autonomous: it keeps listening for new policies from the policy server, transforms those policies into router-compliant access lists, and commits those access lists to the specified interface on the specified router on the network, with error feedback on the commit process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for CISCO routers.

  11. Classification of weld defect based on information fusion technology for radiographic testing system

    NASA Astrophysics Data System (ADS)

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing radiographic testing systems. This paper proposes a novel weld defect classification method based on an information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
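
    The fusion step rests on Dempster's rule of combination. A minimal sketch, with hypothetical weld defect classes and masses:

```python
# Minimal Dempster-Shafer combination of two mass functions over weld
# defect classes, the fusion rule the paper builds on. Masses map
# frozensets of class labels to belief mass; classes here are invented.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to disagreement
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# e.g. two features both favouring 'crack' over 'porosity':
m1 = {frozenset({"crack"}): 0.6, frozenset({"crack", "porosity"}): 0.4}
m2 = {frozenset({"crack"}): 0.5, frozenset({"porosity"}): 0.2,
      frozenset({"crack", "porosity"}): 0.3}
print(dempster_combine(m1, m2))
```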

  12. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

    In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
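
    A simplified stand-in for the MKL fusion step: combine a text kernel and an image kernel with a scalar weight and train an SVM on the resulting Gram matrix. Full MKL learns the kernel weights jointly with the classifier; here the weight is grid-searched for illustration.

```python
# Simplified stand-in for MKL fusion: a convex combination of a text
# kernel and an image kernel fed to a precomputed-kernel SVM. Full MKL
# optimizes the kernel weights jointly; here they are grid-searched.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(X_text, X_img, w):
    return w * rbf_kernel(X_text) + (1 - w) * rbf_kernel(X_img)

def fit_mkl_like(X_text, X_img, y, weights=np.linspace(0, 1, 11)):
    best = None
    for w in weights:
        K = combined_kernel(X_text, X_img, w)
        clf = SVC(kernel="precomputed").fit(K, y)
        acc = clf.score(K, y)  # illustrative; use cross-validation in practice
        if best is None or acc > best[0]:
            best = (acc, w, clf)
    return best  # (accuracy, kernel weight, fitted classifier)
```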

  13. Agent-Based Learning Environments as a Research Tool for Investigating Teaching and Learning.

    ERIC Educational Resources Information Center

    Baylor, Amy L.

    2002-01-01

    Discusses intelligent learning environments for computer-based learning, such as agent-based learning environments, and their advantages over human-based instruction. Considers the effects of multiple agents; agents and research design; the use of Multiple Intelligent Mentors Instructing Collaboratively (MIMIC) for instructional design for…

  14. Distributed-knowledge-based spectral processing and classification system for instruction and learning

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.

    1999-12-01

    This paper develops a distributed knowledge-based spectral processing and classification system which functions in one of two modes, executive and assistant. In the executive mode the system functions as a stand-alone system, automatically performing all the tasks from spectral enhancement, feature extraction and selection, to spectral classification and interpretation using the optimally feasible algorithms. In the assistant mode the system leads the user through the entire spectral processing and classification process, allowing a user to select appropriate parameters, their weights, knowledge organization method and a classification algorithm. Thus, the latter mode can also be used for teaching and instruction. It is shown how novice users can select a set of parameters, adjust their weights, and examine the classification process. Since different classifiers have various underlying assumptions, provisions have been made to control these assumptions, allowing users to select the parameters individually and combined, and providing facilities to visualize the interrelationships among the parameters.

  15. Using Agent Based Modeling (ABM) to Develop Cultural Interaction Simulations

    NASA Technical Reports Server (NTRS)

    Drucker, Nick; Jones, Phillip N.

    2012-01-01

    Today, most cultural training is based on or built around "cultural engagements", discrete interactions between the individual learner and one or more cultural "others". Often, success in the engagement is itself the objective. In reality, these interactions usually involve secondary and tertiary effects with potentially wide-ranging consequences. The concern is that learning culture within a strict engagement context might lead to "checklist" cultural thinking that will not empower learners to understand the full consequences of their actions. We propose the use of agent-based modeling (ABM) to collect and store engagement effects and, by simulating the effects of social networks, propagate them over time, distance, and consequence. The ABM development allows for rapid modification to re-create any number of population types, extending the applicability of the model to any requirement for social modeling.

  16. Customer Credit Scoring Method Based on the SVDD Classification Model with Imbalanced Dataset

    NASA Astrophysics Data System (ADS)

    Tian, Bo; Nan, Lin; Zheng, Qin; Yang, Lei

    Customer credit scoring is a typical class of pattern classification problem with imbalanced datasets. A new customer credit scoring method based on the support vector domain description (SVDD) classification model is proposed in this paper. The main techniques of customer credit scoring are reviewed, the SVDD model with an imbalanced dataset is analyzed, and a prediction method for customer credit scoring based on the SVDD model is proposed. Our experimental results confirm that the approach is effective in ranking and classifying customer credit.
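
    Since SVDD with an RBF kernel is equivalent to the one-class ν-SVM, scikit-learn's OneClassSVM can serve as a stand-in for a sketch: describe the well-represented (good-credit) class and score new applicants against that boundary, sidestepping the imbalance. Parameter values below are illustrative.

```python
# SVDD with an RBF kernel is equivalent to the one-class nu-SVM, so
# scikit-learn's OneClassSVM stands in here: fit a domain description
# around the majority (good-credit) class and rank new applicants by
# their signed distance to that boundary. Parameters are illustrative.
from sklearn.svm import OneClassSVM

def fit_credit_domain(X_good, nu=0.05, gamma="scale"):
    """Fit a domain description around the well-represented class."""
    return OneClassSVM(nu=nu, gamma=gamma).fit(X_good)

def credit_rank(model, X_new):
    """Signed distance to the boundary: higher = more 'good-like'."""
    return model.decision_function(X_new)
```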

  17. A new texture and shape based technique for improving meningioma classification.

    PubMed

    Fatima, Kiran; Arooj, Arshia; Majeed, Hammad

    2014-11-01

    Over the past decade, computer-aided diagnosis has been growing rapidly due to the availability of patient data, sophisticated image acquisition tools, and advances in image processing and machine learning algorithms. Meningiomas are tumors of the brain and spinal cord; they account for 20% of all brain tumors. Meningioma subtype classification involves the classification of benign meningioma into four major subtypes: meningothelial, fibroblastic, transitional, and psammomatous. Under the microscope, the histology images of these four subtypes show a variety of textural and structural characteristics. High intraclass and low interclass variability in meningioma subtypes makes this an extremely complex classification problem. A number of techniques have been proposed for meningioma subtype classification, with varying performance on different subtypes. Most of these techniques employed wavelet packet transforms for textural feature extraction and analysis of meningioma histology images. In this article, a hybrid classification technique based on texture and shape characteristics is proposed for the classification of meningioma subtypes. Meningothelial and fibroblastic subtypes are classified on the basis of nuclei shapes, while grey-level co-occurrence matrix textural features are used to train a multilayer perceptron for the classification of transitional and psammomatous subtypes. On the whole, an average classification accuracy of 92.50% is achieved through the proposed hybrid classifier, which to the best of our knowledge is the highest reported. PMID:25060536

  18. Classification of normal and diseased liver shapes based on Spherical Harmonics coefficients.

    PubMed

    Mofrad, Farshid Babapour; Zoroofi, Reza Aghaeizadeh; Tehrani-Fard, Ali Abbaspour; Akhlaghpoor, Shahram; Sato, Yoshinobu

    2014-05-01

    Liver-shape analysis and quantification is still an open research subject. Quantitative assessment of the liver is of clinical importance in various procedures such as diagnosis, treatment planning, and monitoring. Liver-shape classification is of clinical importance for corresponding intra-subject and inter-subject studies. In this research, we propose a novel technique for liver-shape classification based on Spherical Harmonics (SH) coefficients. The proposed liver-shape classification algorithm consists of the following steps: (a) preprocessing, including mesh generation and simplification, point-set matching, and surface-to-template alignment; (b) liver-shape parameterization, including surface normalization and SH expansion followed by parameter space registration; (c) feature selection and classification, including frequency-based feature selection, feature space reduction by Principal Component Analysis (PCA), and classification. This multi-step approach is novel in the sense that registration and feature selection for liver-shape classification are proposed, implemented, and validated for normal and diseased livers in the SH domain. Various groups of SH features, after applying conventional PCA and/or PCA ordered by p-value, are employed in two classifiers, Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN), on 101 liver data sets. Results show that the proposed features combined with these classifiers outperform existing liver-shape classification techniques that employ liver surface information in the spatial domain. On the available data sets, the proposed method can successfully classify normal and diseased livers with a correct classification rate above 90%. This performance is on average higher than that of conventional liver-shape classification methods. Several standard metrics such as leave-one-out cross-validation and Receiver Operating Characteristic (ROC) analysis are employed in the experiments and
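
    The SH parameterization step can be sketched as a least-squares expansion of a star-shaped surface's radius function in spherical harmonics, with the low-order coefficients serving as shape features. The sampling scheme and maximum degree below are illustrative, not the paper's.

```python
# Sketch of the SH parameterization step: expand a star-shaped surface's
# radius function r(theta, phi) in real spherical harmonics by least
# squares and use the coefficients as shape features. Maximum degree is
# illustrative; the fit needs more sample points than basis functions.
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, n, azimuth, polar)

def sh_coefficients(theta, phi, r, l_max=8):
    """theta: azimuth in [0, 2pi), phi: polar angle in [0, pi], r: radii."""
    cols = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, theta, phi)
            # Build a real basis from the complex harmonics.
            cols.append(np.real(y) if m >= 0 else np.imag(y))
    A = np.column_stack(cols)                  # (n_points, n_basis)
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coeffs                              # shape feature vector
```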

  1. Cell-based therapy technology classifications and translational challenges

    PubMed Central

    Mount, Natalie M.; Ward, Stephen J.; Kefalas, Panos; Hyllner, Johan

    2015-01-01

    Cell therapies offer the promise of treating and altering the course of diseases that cannot be addressed adequately by existing pharmaceuticals. Cell therapies are a diverse group across cell types and therapeutic indications; they have been an active area of research for many years and are now strongly emerging through translation towards successful commercial development and patient access. In this article, we present a classification of cell therapies on the basis of their underlying technologies rather than the more commonly used classification by cell type, because the regulatory path and manufacturing solutions are often similar within a technology area due to the nature of the methods used. We analyse the progress of new cell therapies towards clinical translation, examine how they are addressing the clinical, regulatory, manufacturing and reimbursement requirements, describe some of the remaining challenges, and provide perspectives on how the field may progress in the future. PMID:26416686

  2. Studies of Template-based Photometric Classification of Supernovae

    NASA Astrophysics Data System (ADS)

    Asimacopoulos, Leia; Londo, Stephen; Macaluso, Joseph; Cunningham, John; Kuhlmann, Steve; Kovacs, Eve

    2016-01-01

    We study photometric classification of Type Ia (SNIa) and core collapse (SNcc) supernovae using a combination of simulated data from DES and real data from SDSS. We increase the number of core collapse templates from the eight commonly used to type SDSS supernovae (PSNID) to the forty-five currently available in SNANA. These are implemented in the SNCosmo analysis package. Our goal is to study the accuracy in identifying all types of supernovae as a function of the number and types of templates.

  3. Agent-Based Mediation and Cooperative Information Systems

    SciTech Connect

    PHILLIPS, LAURENCE R.; LINK, HAMILTON E.; GOLDSMITH, STEVEN Y.

    2002-06-02

    This report describes the results of research and development in the area of communication among disparate species of software agents. The two primary elements of the work are the formation of ontologies for use by software agents and the means by which software agents are instructed to carry out complex tasks that require interaction with other agents. This work was grounded in the areas of commercial transport and cybersecurity.

  4. Method for breast cancer classification based solely on morphological descriptors

    NASA Astrophysics Data System (ADS)

    Todd, Catherine A.; Naghdy, Golshah

    2004-05-01

    A decision support system has been developed to assist the radiologist during mammogram classification. In this paper, mass identification and segmentation methods are discussed in brief. Fuzzy region-growing techniques are applied to effectively segment the tumour candidate from surrounding breast tissue. Boundary extraction is implemented using a unit vector rotating about the mass core. The focus of this work is on the feature extraction and classification processes. Important information relating to the malignancy of a mass may be derived from its morphological properties. Mass shape and boundary roughness are the primary features used in this research to discriminate between the two types of lesions. A subset of thirteen shape descriptors is input to a binary decision tree classifier that provides a final diagnosis of tumour malignancy. Features that combine to produce the most accurate result in distinguishing between malignant and benign lesions include: spiculation index, zero crossings, boundary roughness index, and area-to-perimeter ratio. Using this method, a classification result of high sensitivity and specificity is achieved, with false-positive and false-negative rates of 9.3% and 0% respectively.
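
    The classifier stage reduces to a small decision tree over the named shape descriptors. A sketch under the assumption that the four descriptors are available as a feature matrix (the tree depth and any data are placeholders):

```python
# Sketch: a shallow binary decision tree over the four morphological
# descriptors named in the study. Depth and data are placeholders; only
# the classifier family matches the paper's approach.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["spiculation_index", "zero_crossings",
            "boundary_roughness_index", "area_to_perimeter_ratio"]

def train_mass_classifier(X, y):
    """X: (n_masses, 4) descriptor matrix; y: 1 = malignant, 0 = benign."""
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=FEATURES))  # inspect learned rules
    return tree
```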

  5. Measure of Landscape Heterogeneity by Agent-Based Methodology

    NASA Astrophysics Data System (ADS)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.

    2016-06-01

    With the rapid increase of the world's population, efficient food production is one of the key factors of human survival. Since biodiversity and heterogeneity are the basis of sustainable agriculture, the authors tried to measure the heterogeneity of a chosen landscape. The EU farming and subsidizing policies (EEA, 2014) support landscape heterogeneity and diversity; nevertheless, exact measurements and calculations, apart from statistical parameters (standard deviation, mean), do not really exist. In the present paper the authors' goal is to find an objective, dynamic method that measures landscape heterogeneity. This is achieved with so-called agent-based modelling, where randomly dispatched dynamic scouts record the observed land cover parameters and sum up the features of a new type of land. During the simulation the agents accumulate a Monte Carlo integral as a diversity landscape potential, which can be considered the unit of the 'greening' measure. As the final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.
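
    A toy version of the scout mechanism, in which random samples of a land cover grid accumulate into a Shannon-type diversity estimate; the paper's actual 'greening' unit and scout dynamics are richer than this:

```python
# Toy Monte Carlo scouts: sample random cells of a land cover grid and
# accumulate a Shannon-type diversity estimate. Illustrative only; the
# paper's diversity landscape potential is defined differently.
import numpy as np

def scout_diversity(cover_grid, n_scouts=10_000,
                    rng=np.random.default_rng(0)):
    rows = rng.integers(0, cover_grid.shape[0], n_scouts)
    cols = rng.integers(0, cover_grid.shape[1], n_scouts)
    _, counts = np.unique(cover_grid[rows, cols], return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())   # Shannon diversity of the samples
```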

  6. Novel securinine derivatives as topoisomerase I based antitumor agents.

    PubMed

    Hou, Wen; Wang, Zhen-Ya; Peng, Cheng-Kang; Lin, Jing; Liu, Xin; Chang, Yi-Qun; Xu, Jun; Jiang, Ren-Wang; Lin, Hui; Sun, Ping-Hua; Chen, Wei-Min

    2016-10-21

    DNA topoisomerase I (Topo I) has been validated as a target for anticancer agents. In this study, a series of novel securinine derivatives bearing a β'-hydroxy-α,β-unsaturated ketone moiety were designed and synthesized via a Baylis-Hillman reaction for screening as Topo I inhibitors and antitumor agents. Their topoisomerase I inhibitory activity as well as their cytotoxicity against four human cancer cell lines (A549, HeLa, HepG2, SH-SY5Y) were evaluated, and two pairs of diastereomers, 4a-1 and 4a-6, with significant Topo I inhibitory activity and potent anti-proliferative activity against cancer cell lines were identified. The diastereomers were separated, and the absolute configurations of five pairs of diastereomers were determined based on X-ray crystallographic analysis and circular dichroism (CD) spectra. Further mechanistic studies of the most active compounds, 4a-1-R and 4a-1-S, indicated that this kind of securinine derivative exhibits a different inhibitory mechanism from that of camptothecin, an established Topo I inhibitor. Unlike camptothecin, compounds 4a-1-R and 4a-1-S specifically inhibit the binding of Topo I to DNA rather than forming the drug-enzyme-DNA covalent ternary complex. In addition, molecular docking and molecular dynamics studies revealed the binding patterns of these compounds with Topo I. PMID:27344492

  7. Biodistribution of gadolinium-based contrast agents, including gadolinium deposition

    PubMed Central

    Aime, Silvio; Caravan, Peter

    2010-01-01

    The biodistribution of approved gadolinium (Gd) based contrast agents (GBCA) is reviewed. After intravenous injection GBCA distribute in the blood and the extracellular space and transiently through the excretory organs. Preclinical animal studies and the available clinical literature indicate that all these compounds are excreted intact. Elimination tends to be rapid and for the most part, complete. In renally insufficient patients the plasma elimination half-life increases substantially from hours to days depending on renal function. In patients with impaired renal function and nephrogenic systemic fibrosis (NSF), the agents gadodiamide, gadoversetamide, and gadopentetate dimeglumine have been shown to result in Gd deposition in the skin and internal organs. In these cases, it is likely that the Gd is no longer present as the GBCA, but this has still not been definitively shown. In preclinical models very small amounts of Gd are retained in the bone and liver, and the amount retained correlates with the kinetic and thermodynamic stability of the GBCA with respect to Gd release in vitro. The pattern of residual Gd deposition in NSF subjects may be different than that observed in preclinical rodent models. GBCA are designed to be used via intravenous administration. Altering the route of administration and/or the formulation of the GBCA can dramatically alter the biodistribution of the GBCA and can increase the likelihood of Gd deposition. PMID:19938038

  8. Knowledge-based algorithm for satellite image classification of urban wetlands

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan; Ji, Wei

    2014-10-01

    It has been a challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This technical difficulty results mainly from inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and spatial complexity of wetlands in human transformed, heterogeneous urban landscapes. To address this issue, an image classification approach has been developed to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with a knowledge-based algorithm. The algorithm includes a set of decision rules of identifying wetland cover in relation to their elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geo-statistics. ERDAS Imagine software was used to develop the knowledge base and implement the classification. The study area is the metropolitan region of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland. The results suggest that the knowledge-based image classification approach can enhance urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.

  9. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as face recognition. It utilizes training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples reduces classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC), which exploits novel virtual images and can obtain high classification accuracy. The procedure for producing virtual images is very simple, but their use can bring surprising performance improvement. The virtual images can sufficiently capture the features of the original face images in some cases. Extensive experimental results clearly demonstrate that the proposed method can effectively improve the classification accuracy. This is mainly attributed to the integration of the collaborative representation and the proposed feature-information dominated virtual images.
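
    The collaborative representation core that the proposed MCRC extends can be stated compactly: code the test sample over all training samples with an l2 penalty and pick the class with the smallest (regularized) reconstruction residual. The virtual-image augmentation would simply add columns to the training matrix. A sketch:

```python
# Core collaborative representation classifier (CRC-RLS): code the test
# sample over all training samples with an l2 penalty and assign the
# class with the smallest class-wise regularized residual. The paper's
# virtual images would just add extra columns to X_train.
import numpy as np

def crc_classify(X_train, y_train, x_test, lam=1e-2):
    """X_train: (n_dims, n_samples), columns are vectorized face images."""
    n = X_train.shape[1]
    P = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n), X_train.T)
    alpha = P @ x_test                      # collaborative coding vector
    classes = np.unique(y_train)
    residuals = []
    for c in classes:
        mask = (y_train == c)
        recon = X_train[:, mask] @ alpha[mask]
        residuals.append(np.linalg.norm(x_test - recon)
                         / (np.linalg.norm(alpha[mask]) + 1e-12))
    return classes[int(np.argmin(residuals))]
```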

  10. Land Cover Classification from Full-Waveform LIDAR Data Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Zhou, M.; Li, C. R.; Ma, L.; Guan, H. C.

    2016-06-01

    In this study, a land cover classification method based on multi-class Support Vector Machines (SVM) is presented to predict the types of land cover in the Miyun area. The backscattered full waveforms were processed following a workflow of waveform pre-processing, waveform decomposition, and feature extraction. The extracted features, which consist of distance, intensity, Full Width at Half Maximum (FWHM), and backscattering cross-section, were corrected and used as attributes of training data to generate the SVM prediction model. The SVM prediction model was applied to predict the types of land cover in the Miyun area as ground, trees, buildings, and farmland. The classification results for these four types of land cover were evaluated against ground truth information derived from the CCD image data of the Miyun area. The proposed classification algorithm achieved an overall classification accuracy of 90.63%. To better assess the SVM classification results, they were compared with those of the Artificial Neural Network (ANN) method, and the comparison showed that the SVM method achieved better classification results.

  11. Classification of high resolution imagery based on fusion of multiscale texture features

    NASA Astrophysics Data System (ADS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-03-01

    In high-resolution data classification, combining texture features with spectral bands can effectively improve classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images with fixed window sizes from 3×3 to 15×15 and the comparison of all the posterior probability values for every pixel; the largest probability value wins, and the pixel is automatically assigned to the corresponding land cover type. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves classification accuracy compared to methods based on a single fixed window size.
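
    The per-pixel fusion rule reads directly as an argmax over scales, assuming per-scale class posterior maps are available:

```python
# The fusion rule in a few lines: classify once per window size, then keep
# each pixel's label from whichever scale gave it the highest posterior.
import numpy as np

def fuse_multiscale(posteriors):
    """posteriors: (n_scales, n_classes, H, W) per-scale class probabilities."""
    best_per_scale = posteriors.max(axis=1)          # (n_scales, H, W)
    winning_scale = best_per_scale.argmax(axis=0)    # (H, W)
    labels_per_scale = posteriors.argmax(axis=1)     # (n_scales, H, W)
    return np.take_along_axis(labels_per_scale,
                              winning_scale[None], axis=0)[0]
```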

  12. Adaptivity in Agent-Based Routing for Data Networks

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Kirshner, Sergey; Merz, Chris J.; Turner, Kagan

    2000-01-01

    Adaptivity, both of the individual agents and of the interaction structure among the agents, seems indispensable for scaling up multi-agent systems (MASs) in noisy environments. One important consideration in designing adaptive agents is choosing their action spaces to be as amenable as possible to machine learning techniques, especially to reinforcement learning (RL) techniques. One important way to make the interaction structure connecting agents itself adaptive is to have the intentions and/or actions of the agents be in the input spaces of the other agents, much as in Stackelberg games. We consider both kinds of adaptivity in the design of a MAS to control network packet routing. We demonstrate on the OPNET event-driven network simulator the perhaps surprising fact that simply changing the action space of the agents to be better suited to RL can result in very large improvements in their potential performance: at their best settings, our learning-amenable router agents achieve throughputs up to three and one half times better than that of the standard Bellman-Ford routing algorithm, even when the Bellman-Ford protocol traffic is maintained. We then demonstrate that much of that potential improvement can be realized by having the agents learn their settings when the agent interaction structure is itself adaptive.

  13. On agent-based modeling and computational social science

    PubMed Central

    Conte, Rosaria; Paolucci, Mario

    2014-01-01

    In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS. PMID:25071642

  14. Multi-agent-based Order Book Model of financial markets

    NASA Astrophysics Data System (ADS)

    Preis, T.; Golke, S.; Paul, W.; Schneider, J. J.

    2006-08-01

    We introduce a simple model for simulating financial markets, based on an order book, in which several agents continuously trade one asset at a virtual exchange. For a stationary market, the structure of the model, the order flow rates of the different order types, and the price-time priority matching algorithm used produce only diffusive price behavior. We show that a market trend, i.e., an asymmetric order flow of any type, leads to a non-trivial Hurst exponent for the price development, but not to "fat-tailed" return distributions. When the order entry depth is additionally coupled to the prevailing trend, the stylized empirical fact of "fat tails" can also be reproduced by our Order Book Model.
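
    The matching rule at the heart of such a model is price-time priority. The skeleton below implements a continuous double auction with that rule; the random order flow merely stands in for the paper's agent types and calibrated flow rates:

```python
# Skeleton of a continuous double auction with price-time priority, the
# matching rule an order book model is built on. Random order flow stands
# in for the paper's agents; the trade-price convention is simplified.
import heapq, itertools, random

class OrderBook:
    def __init__(self):
        self.bids, self.asks = [], []          # max-heap / min-heap
        self.seq = itertools.count()           # time-priority tie-breaker

    def limit(self, side, price, qty):
        book = self.bids if side == "buy" else self.asks
        key = (-price, next(self.seq)) if side == "buy" else (price, next(self.seq))
        heapq.heappush(book, (*key, qty))
        self.match()

    def match(self):
        # Trade while the best bid crosses the best ask.
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            (nbp, bt, bq), (ap, at, aq) = heapq.heappop(self.bids), heapq.heappop(self.asks)
            traded = min(bq, aq)
            print(f"trade {traded} @ {ap}")
            if bq > traded: heapq.heappush(self.bids, (nbp, bt, bq - traded))
            if aq > traded: heapq.heappush(self.asks, (ap, at, aq - traded))

book = OrderBook()
for _ in range(10):
    book.limit(random.choice(["buy", "sell"]), random.randint(98, 102), 1)
```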

  15. An agent-based mathematical model about carp aggregation

    NASA Astrophysics Data System (ADS)

    Liang, Yu; Wu, Chao

    2005-05-01

    This work presents an agent-based mathematical model to simulate the aggregation of carp, a harmful fish in North America. The model is derived from the following assumptions: (1) rather than arising from consensus among the carp involved, aggregation is a completely random and spontaneous physical behavior of numerous independent carp; (2) carp aggregation is a collective effect of inter-carp and carp-environment interaction; (3) the inter-carp interaction can be derived from statistical analysis of large-scale observed data. The proposed mathematical model is mainly based on an empirical inter-carp force field whose effect is characterized by repulsion, parallel orientation, attraction, an out-of-perception zone, and a blind zone. Based on this model, the aggregation behavior of carp is formulated, and preliminary simulation results for the aggregation of a small number of carp within a simple environment are provided. Further experiment-based validation of the mathematical model will be carried out in our future work.
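
    One interaction step with the zones named above can be sketched as follows; zone radii and gains are invented for illustration, and the rear blind sector is omitted for brevity:

```python
# One inter-carp interaction step using the zones named in the abstract:
# repulsion up close, parallel orientation at mid range, attraction
# farther out, no response beyond perception. Radii and gains are
# invented; the rear blind sector is omitted for brevity.
import numpy as np

R_REP, R_ORI, R_ATT = 1.0, 3.0, 8.0   # zone boundaries (arbitrary units)

def carp_force(pos, vel, others_pos, others_vel):
    force = np.zeros(2)
    for p, v in zip(others_pos, others_vel):
        d = p - pos
        dist = np.linalg.norm(d)
        if dist < 1e-9 or dist > R_ATT:
            continue                      # out of the perception zone
        if dist < R_REP:
            force -= d / dist             # repulsion
        elif dist < R_ORI:
            force += (v - vel) * 0.5      # align with neighbour's velocity
        else:
            force += d / dist * 0.2       # weak attraction
    return force
```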

  16. A Neuro-Fuzzy based System for Classification of Natural Textures

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin

    2016-06-01

    A statistical approach based on the coordinated clusters representation of images is used for the classification and recognition of textured images. Two issues are addressed in this paper: the first is the extraction of texture features from the fuzzy texture spectrum, in the chromatic and achromatic domains, from each colour component histogram of natural texture images; the second is the fusion of multiple classifiers. An advanced neuro-fuzzy learning scheme has also been adopted. The classification test results show the high performance of the proposed method compared with other works; it may find industrial application in texture classification.

  17. Brazilian Cardiorespiratory Fitness Classification Based on Maximum Oxygen Consumption

    PubMed Central

    Herdy, Artur Haddad; Caixeta, Ananda

    2016-01-01

    Background: Cardiopulmonary exercise testing (CPET) is the most complete tool available to assess functional aerobic capacity (FAC). Maximum oxygen consumption (VO2 max), an important biomarker, reflects the real FAC. Objective: To develop a cardiorespiratory fitness (CRF) classification based on VO2 max in a Brazilian sample of healthy and physically active individuals of both sexes. Methods: We selected 2837 CPETs from 2837 individuals aged 15 to 74 years, distributed as follows: G1 (15 to 24); G2 (25 to 34); G3 (35 to 44); G4 (45 to 54); G5 (55 to 64); and G6 (65 to 74). Good CRF was defined as the mean VO2 max obtained for each group, generating the following subclassification: Very Low (VL): VO2 < 50% of the mean; Low (L): 50%-80%; Fair (F): 80%-95%; Good (G): 95%-105%; Excellent (E): > 105%. Results (VO2 max, mL/kg/min):

    Men    VL (<50%)   L (50-80%)    F (80-95%)    G (95-105%)   E (>105%)
    G1     < 25.30     25.30-40.48   40.49-48.07   48.08-53.13   > 53.13
    G2     < 23.70     23.70-37.92   37.93-45.03   45.04-49.77   > 49.77
    G3     < 22.70     22.70-36.32   36.33-43.13   43.14-47.67   > 47.67
    G4     < 20.25     20.25-32.40   32.41-38.47   38.48-42.52   > 42.52
    G5     < 17.54     17.65-28.24   28.25-33.53   33.54-37.06   > 37.06
    G6     < 15.00     15.00-24.00   24.01-28.50   28.51-31.50   > 31.50

    Women  VL (<50%)   L (50-80%)    F (80-95%)    G (95-105%)   E (>105%)
    G1     < 19.45     19.45-31.12   31.13-36.95   36.96-40.84   > 40.85
    G2     < 19.05     19.05-30.48   30.49-36.19   36.20-40.00   > 40.01
    G3     < 17.45     17.45-27.92   27.93-33.15   33.16-34.08   > 34.09
    G4     < 15.55     15.55-24.88   24.89-29.54   29.55-32.65   > 32.66
    G5     < 14.30     14.30-22.88   22.89-27.17   27.18-30.03   > 30.04
    G6     < 12.55     12.55-20.08   20.09-23.84   23.85-26.35   > 26.36

    Conclusions: This chart stratifies VO2 max measured on a treadmill in a robust Brazilian sample and can be used as an alternative for the real functional evaluation of physically active and healthy individuals stratified by age and sex. PMID:27305285

  18. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  19. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    ERIC Educational Resources Information Center

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  1. The Impact of a Peer-Learning Agent Based on Pair Programming in a Programming Course

    ERIC Educational Resources Information Center

    Han, Keun-Woo; Lee, EunKyoung; Lee, YoungJun

    2010-01-01

    This paper analyzes the educational effects of a peer-learning agent based on pair programming in programming courses. A peer-learning agent system was developed to facilitate the learning of a programming language through the use of pair programming strategies. This system is based on the role of a peer-learning agent from pedagogical and…

  2. Fuzzy Rule-Based Classification System for Assessing Coronary Artery Disease

    PubMed Central

    Mohammadpour, Reza Ali; Abedi, Seyed Mohammad; Bagheri, Somayeh; Ghaemian, Ali

    2015-01-01

    The aim of this study was to determine the accuracy of fuzzy rule-based classification that could noninvasively predict CAD based on the myocardial perfusion scan test and clinical-epidemiological variables. This was a cross-sectional study in which the characteristics, myocardial perfusion scan (MPS) results, and coronary artery angiography of 115 patients, 62 (53.9%) male, at Mazandaran Heart Center in the north of Iran were collected. We constructed membership functions for the medical variables by reviewing the related literature. To improve the classification performance, we used the methods of Ishibuchi et al. and Nozaki et al., adjusting the grade of certainty CFj of each rule. The system includes 144 rules, and the antecedent part of every rule has more than one condition. The coronary artery disease data used in this paper contained 115 samples. The data were classified into four classes, namely, class 1 (normal), class 2 (stenosis in a single vessel), class 3 (stenosis in two vessels), and class 4 (stenosis in three vessels), which had 39, 35, 17, and 24 subjects, respectively. The accuracy of the fuzzy if-then rule classification was 92.8 percent when the classification result was based on rule selection by an expert, and 91.9 percent when the classification result was obtained according to the equation. To increase the classification rate, we deleted the extra rules to reduce the number of fuzzy rules after introducing the membership functions. PMID:26448783
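
    The rule machinery can be illustrated compactly: each fuzzy if-then rule fires with the product of its antecedent memberships, and the winning class maximizes firing strength times the rule's certainty grade CF. The membership functions and rules below are placeholders, not the 144 rules of the study:

```python
# Minimal fuzzy if-then rule classifier with certainty grades in the
# style of Ishibuchi et al.: winner maximizes (firing strength x CF).
# Membership functions and rules are placeholders, not the paper's.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c: return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

RULES = [   # (antecedent membership functions per input, class, CF)
    ([lambda x: tri(x, 0, 25, 50), lambda x: tri(x, 0, 0.5, 1)], "normal", 0.9),
    ([lambda x: tri(x, 25, 50, 75), lambda x: tri(x, 0.5, 1, 1.5)], "one-vessel", 0.8),
]

def classify(inputs):
    best_class, best_score = None, 0.0
    for antecedents, label, cf in RULES:
        strength = 1.0
        for mf, x in zip(antecedents, inputs):
            strength *= mf(x)             # product t-norm over antecedents
        if strength * cf > best_score:
            best_class, best_score = label, strength * cf
    return best_class

print(classify((30, 0.8)))   # -> "normal" for these toy rules
```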

  3. Classification of pulmonary airway disease based on mucosal color analysis

    NASA Astrophysics Data System (ADS)

    Suter, Melissa; Reinhardt, Joseph M.; Riker, David; Ferguson, John Scott; McLennan, Geoffrey

    2005-04-01

    Airway mucosal color changes occur in response to the development of bronchial diseases including lung cancer, cystic fibrosis, chronic bronchitis, emphysema, and asthma. These changes are often visualized using standard macro-optical bronchoscopy techniques. A limitation of this form of assessment is that the subtle changes indicating early stages of disease development may often be missed because the assessment is highly subjective, especially for inexperienced bronchoscopists. Tri-chromatic CCD chip bronchoscopes allow for digital color analysis of the pulmonary airway mucosa, a form of analysis that may facilitate a greater understanding of airway disease response. A two-step image classification approach is employed: the first step distinguishes between healthy and diseased bronchoscope images, and the second classifies the detected abnormal images into one of four possible disease categories. A database of airway mucosal color constructed from healthy human volunteers is used as a standard against which statistical comparisons are made for mucosa with known apparent airway abnormalities. This approach demonstrates great promise as an effective detection and diagnosis tool, highlighting potentially abnormal airway mucosa and identifying regions suited to further analysis via airway forceps biopsy or newly developed micro-optical biopsy strategies. Following the identification of abnormal airway images, a neural network is used to distinguish between the different disease classes. We have shown that classification of potentially diseased airway mucosa is possible through comparative color analysis of digital bronchoscope images. The combination of the two strategies appears to increase the classification accuracy in addition to greatly decreasing the computational time.

  4. Assessing the Performance of a Classification-Based Vulnerability Analysis Model.

    PubMed

    Wang, Tai-ran; Mousseau, Vincent; Pedroni, Nicola; Zio, Enrico

    2015-09-01

    In this article, a classification model based on the majority rule sorting (MR-Sort) method is employed to evaluate the vulnerability of safety-critical systems with respect to malevolent intentional acts. The model is built on the basis of a (limited-size) set of data representing (a priori known) vulnerability classification examples. The empirical construction of the classification model introduces a source of uncertainty into the vulnerability analysis process: a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments) is thus in order. Three different approaches are considered to this aim: (i) a model-retrieval-based approach, (ii) the bootstrap method, and (iii) the leave-one-out cross-validation technique. The analyses are presented with reference to an illustrative case study involving the vulnerability assessment of nuclear power plants. PMID:25487957
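
    The MR-Sort assignment rule itself is short: an alternative is assigned to the best category whose lower profile it matches on a weighted majority of criteria. A bare-bones sketch with invented profiles and weights:

```python
# Bare-bones MR-Sort assignment: an alternative reaches a category when
# the weights of the criteria on which it meets that category's lower
# profile sum to at least the majority threshold. Profiles, weights and
# the threshold below are invented for illustration.
def mr_sort(a, profiles, weights, lam):
    """a: criteria values; profiles: lower profiles, best category first."""
    for cat, b in enumerate(profiles):               # try best category first
        support = sum(w for ai, bi, w in zip(a, b, weights) if ai >= bi)
        if support >= lam:
            return cat                               # index of assigned category
    return len(profiles)                             # falls into worst category

# hypothetical 3-criterion vulnerability example:
profiles = [(0.8, 0.8, 0.7), (0.5, 0.5, 0.4)]        # high, medium
print(mr_sort((0.9, 0.6, 0.8), profiles, (0.4, 0.3, 0.3), lam=0.6))  # -> 0
```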

  5. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  6. Classification of tea category using a portable electronic nose based on an odor imaging sensor array.

    PubMed

    Chen, Quansheng; Liu, Aiping; Zhao, Jiewen; Ouyang, Qin

    2013-10-01

    A developed portable electronic nose (E-nose) based on an odor imaging sensor array was successfully used for classification of three different fermentation degrees of tea (i.e., green tea, black tea, and Oolong tea). The odor imaging sensor array was fabricated by printing nine dyes, including porphyrin and metalloporphyrins, on the hydrophobic porous membrane. A color change profile for each sample was obtained by differentiating the image of sensor array before and after exposure to tea's volatile organic compounds (VOCs). Multivariate analysis was used for the classification of tea categories, and linear discriminant analysis (LDA) achieved 100% classification rate by leave-one-out cross-validation (LOOCV). This study demonstrates that the E-nose based on odor imaging sensor array has a high potential in the classification of tea category according to different fermentation degrees. PMID:23810847

  7. A review on ultrasound-based thyroid cancer tissue characterization and automated classification.

    PubMed

    Acharya, U R; Swapna, G; Sree, S V; Molinari, F; Gupta, S; Bardales, R H; Witkowska, A; Suri, J S

    2014-08-01

    In this paper, we review the different studies that developed Computer Aided Diagnostic (CAD) for automated classification of thyroid cancer into benign and malignant types. Specifically, we discuss the different types of features that are used to study and analyze the differences between benign and malignant thyroid nodules. These features can be broadly categorized into (a) the sonographic features from the ultrasound images, and (b) the non-clinical features extracted from the ultrasound images using statistical and data mining techniques. We also present a brief description of the commonly used classifiers in ultrasound based CAD systems. We then review the studies that used features based on the ultrasound images for thyroid nodule classification and highlight the limitations of such studies. We also discuss and review the techniques used in studies that used the non-clinical features for thyroid nodule classification and report the classification accuracies obtained in these studies. PMID:24206204

  8. Extreme Facial Expressions Classification Based on Reality Parameters

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are a type of emotional expression stimulated by strong emotion; an example of an extreme expression is one accompanied by tears. To provide these types of features, additional elements such as a fluid mechanism (particle system) and physics techniques such as smoothed particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research for producing complex expressions, such as laughing, smiling, crying (the emergence of tears), or sadness escalating to strong crying, as a classification of the extreme expressions that occur on the human face in some cases.

  9. Walking pattern analysis and SVM classification based on simulated gaits.

    PubMed

    Mao, Yuxiang; Saito, Masaru; Kanno, Takehiro; Wei, Daming; Muroi, Hiroyasu

    2008-01-01

    Three classes of walking patterns (normal, caution and danger) were simulated by tying elastic bands to joints of the lower body. In order to distinguish one class from another, four local motions suggested by doctors were investigated stepwise, and differences between levels were evaluated using t-tests. Human adaptability in the tests was also evaluated. We improved the average classification accuracy to 84.50% using a multiclass support vector machine classifier and concluded that human adaptability is a factor that can cause obvious bias in contiguous data collections. PMID:19163856

  10. Correction of Alar Retraction Based on Frontal Classification.

    PubMed

    Kim, Jae Hoon; Song, Jin Woo; Park, Sung Wan; Bartlett, Erica; Nguyen, Anh H

    2015-11-01

    Among the various types of alar deformations in Asians, alar retraction not only has the highest occurrence rate, but is also very complicated to treat because the ala is supported only by cartilage and its soft tissue envelope cannot be easily stretched. As patients' knowledge of aesthetic procedures is becoming more extensive due to increased information dissemination through various media, doctors must give more accurate, logical explanations of the procedures to be performed and their anticipated results, with an emphasis on relevant anatomical features, accurate diagnoses, detailed classifications, and various appropriate methods of surgery. PMID:26648808

  11. A study of land use/land cover information extraction classification technology based on DTC

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Zheng, Yong-guo; Yang, Feng-jie; Jia, Wei-jie; Xiong, Chang-zhen

    2008-10-01

    Decision Tree Classification (DTC) is an organizational form of a multi-level recognition system, which decomposes a complicated classification into simple categories and then resolves them step by step. This paper conducts land use/land cover (LULC) decision tree classification research on areas of Gansu Province in western China. With mid-resolution remote sensing data as the main data source, the authors adopt the decision tree classification method, taking advantage of its fault tolerance and of the way it imitates the processing pattern of human judgment and thinking, and build a decision tree LULC classification model. The research shows that these methods and techniques can increase the level of automation and accuracy of LULC information extraction and better carry out LULC information extraction in the research areas. The main aspects of the research are as follows: 1. Training samples were collected and a comprehensive database supported by remote sensing and ground data was established. 2. Using the CART system, and based on multi-source, multi-temporal remote sensing data and other auxiliary data, the DTC technology effectively combined the unsupervised classification results with expert knowledge; the method and procedure for extracting the decision tree information were specifically developed. 3. In designing the decision tree, classification rules for the various object types were used to build and prune the DTC model so as to treat subdivision classification effectively, and the land use and land cover classification of the research areas was completed. The accuracy evaluation showed that the classification accuracy exceeded 80%.
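
    A minimal sketch of the CART-style classification step is given below, with synthetic spectral features standing in for the multi-source remote sensing data; the class list and the pruning parameters (max_depth, min_samples_leaf) are illustrative assumptions, not the authors' settings.

    ```python
    # Sketch: CART-style decision tree for LULC classification from band values.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    classes = ["cropland", "forest", "grassland", "water"]
    X = np.vstack([rng.normal(loc=k, scale=0.7, size=(200, 4)) for k in range(4)])
    y = np.repeat(classes, 200)

    # Pruning via max_depth / min_samples_leaf mirrors the pruning step in the paper
    tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=[f"band_{i}" for i in range(4)]))
    ```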

  12. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  13. Comparing administered and market-based water allocation systems using an agent-based modeling approach

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Cai, X.; Wang, Z.

    2009-12-01

    It has been well recognized that market-based systems can have significant advantages over administered systems for water allocation. However, there are still few successful water markets around the world, and administered systems remain common in water allocation management practice. This paradox has been under discussion for decades and still calls for attention in both research and practice. This paper explores some insights into the paradox and tries to address why market systems have not been widely implemented for water allocation. Adopting the theory of agent-based systems, we develop a consistent analytical model to interpret both systems. First we derive some theorems based on the analytical model, with respect to the necessary conditions for economic efficiency of water allocation. Following that, the agent-based model is used to illustrate the coherence and differences between administered and market-based systems. The two systems are compared from three aspects: 1) the driving forces acting on the system state, 2) system efficiency, and 3) equity. Regarding economic efficiency, a penalty on the violation of water use permits (or rights) under an administered system can lead to system-wide economic efficiency while remaining acceptable to some agents, following the theory of so-called rational violation. Ideal equity will be realized if the penalty equals the incentive under an administered system, and if transaction costs are zero under a market system. The performances of both the agents and the overall system are explained under an administered system and a market system, respectively. The performances of agents are subject to the different mechanisms of interaction between agents under the two systems. The system emergence (i.e., system benefit, equilibrium market price, etc.), resulting from the performance at the agent level, reflects the different mechanisms of the two systems: the “invisible hand” in the market system and administrative measures (penalty

  14. A space-based classification system for RF transients

    SciTech Connect

    Moore, K.R.; Call, D.; Johnson, S.; Payne, T.; Ford, W.; Spencer, K.; Wilkerson, J.F.; Baumgart, C.

    1993-12-01

    The FORTE (Fast On-Orbit Recording of Transient Events) small satellite is scheduled for launch in mid 1995. The mission is to measure and classify VHF (30--300 MHz) electromagnetic pulses, primarily due to lightning, within a high noise environment dominated by continuous wave carriers such as TV and FM stations. The FORTE Event Classifier will use specialized hardware to implement signal processing and neural network algorithms that perform onboard classification of RF transients and carriers. Lightning events will also be characterized with optical data telemetered to the ground. A primary mission science goal is to develop a comprehensive understanding of the correlation between the optical flash and the VHF emissions from lightning. By combining FORTE measurements with ground measurements and/or active transmitters, other science issues can be addressed. Examples include the correlation of global precipitation rates with lightning flash rates and location, the effects of large scale structures within the ionosphere (such as traveling ionospheric disturbances and horizontal gradients in the total electron content) on the propagation of broad bandwidth RF signals, and various areas of lightning physics. Event classification is a key feature of the FORTE mission. Neural networks are promising candidates for this application. The authors describe the proposed FORTE Event Classifier flight system, which consists of a commercially available digital signal processing board and a custom board, and discuss work on signal processing and neural network algorithms.

  15. Texture Classification Using Local Pattern Based on Vector Quantization.

    PubMed

    Pan, Zhibin; Fan, Hongcheng; Zhang, Li

    2015-12-01

    Local binary pattern (LBP) is a simple and effective descriptor for texture classification. However, it has two main disadvantages: (1) different structural patterns sometimes have the same binary code and (2) it is sensitive to noise. In order to overcome these disadvantages, we propose a new local descriptor named local vector quantization pattern (LVQP). In LVQP, different kinds of texture images are chosen to train a local pattern codebook, where each different structural pattern is described by a unique codeword index. Contrary to the original LBP and its many variants, LVQP does not quantize each neighborhood pixel separately to 0/1, but aims at quantizing the whole difference vector between the central pixel and its neighborhood pixels. Since LVQP deals with the structural pattern as a whole, it has high discriminability and is less sensitive to noise. Our experimental results, achieved by using four representative texture databases, Outex, UIUC, CUReT, and Brodatz, show that the proposed LVQP method can improve classification accuracy significantly and is more robust to noise. PMID:26353370
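
    The core idea of LVQP, quantizing whole difference vectors against a learned codebook rather than thresholding neighbors to 0/1, can be sketched briefly. The codebook size and the k-means quantizer below are assumptions; the paper's exact codebook training may differ.

    ```python
    # Sketch of an LVQP-style descriptor: the 8-neighbor difference vector of each
    # pixel is vector-quantized against a learned codebook, and the image
    # descriptor is the histogram of codeword indices.
    import numpy as np
    from sklearn.cluster import KMeans

    def difference_vectors(img):
        """8-neighbor difference vectors for every interior pixel."""
        c = img[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        diffs = [np.roll(np.roll(img, -dy, 0), -dx, 1)[1:-1, 1:-1] - c for dy, dx in shifts]
        return np.stack(diffs, axis=-1).reshape(-1, 8)

    rng = np.random.default_rng(3)
    train_imgs = [rng.random((64, 64)) for _ in range(5)]   # stand-ins for texture images
    codebook = KMeans(n_clusters=64, n_init=4, random_state=0)  # assumed codebook size
    codebook.fit(np.vstack([difference_vectors(im) for im in train_imgs]))

    def lvqp_histogram(img):
        idx = codebook.predict(difference_vectors(img))
        hist = np.bincount(idx, minlength=codebook.n_clusters).astype(float)
        return hist / hist.sum()

    print(lvqp_histogram(rng.random((64, 64)))[:8])
    ```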

  16. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    In order to achieve high-precision adaptive unsupervised clustering, this paper proposes a method that uses Gaussian distributions to fit the inter-class similarity and the noise distribution; the automatic segmentation threshold is then determined from the fitting result. First, based on the similarity measure of the spectral curves, the method assumes that both target and background follow Gaussian distributions; the distribution characteristics are obtained by fitting the similarity measures between minimum-related windows and center pixels with a Gaussian function, from which the adaptive threshold is derived. Second, the pixel minimum-related windows are used to merge adjacent similar pixels into blocks, completing the dimensionality reduction and realizing the unsupervised classification. AVIRIS data and a set of hyperspectral data we captured are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition and robustness.
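
    The threshold-selection idea can be sketched as fitting a Gaussian to the histogram of similarity values and deriving the cut-off from the fitted parameters. The similarity scores below are synthetic, and the mean-minus-k-sigma rule is an assumed form of the threshold.

    ```python
    # Sketch: fit a Gaussian to a similarity-measure histogram and derive an
    # adaptive threshold from the fitted parameters.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    rng = np.random.default_rng(4)
    similarities = rng.normal(loc=0.9, scale=0.03, size=5000)  # stand-in similarity scores

    hist, edges = np.histogram(similarities, bins=50)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (a, mu, sigma), _ = curve_fit(gaussian, centers, hist,
                                  p0=[hist.max(), centers.mean(), 0.05])

    k = 2.0                                   # assumed multiplier
    threshold = mu - k * abs(sigma)           # pixels more similar than this merge
    print(f"adaptive threshold: {threshold:.3f}")
    ```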

  17. Classification of hospitals based on measured output: the VA system.

    PubMed

    Thomas, J W; Berki, S E; Wyszewianski, L; Ashcraft, M L

    1983-07-01

    Evaluation of hospital performance and improvement of resource allocation in hospital systems require a method for classifying hospitals on the basis of their output. Previous approaches to hospital classification relied largely on input characteristics. The authors propose and apply a procedure for classifying hospitals into groups where within-group hospitals are similar with respect to output. Direct measures of case-mix-adjusted discharges and outpatient visits are the principal measures of patient care output; other measures capture training and research functions. The component measures were weighted, and a composite output measure was calculated for each of the 162 hospitals in the Veterans Administration health care system. The output score then was used as the dependent variable in an Automatic Interaction Detector analysis, which partitioned the 162 hospitals into 10 groups, accounting for 85 per cent of the variance in the dependent variable. An extension of the output classification method is presented for illustration of how the difference between hospitals' actual operating costs and costs predicted on the basis of output can be used in defining isoefficiency groups. PMID:6350744

  18. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, regard to useful information contained in responses to adjacent stimuli about spatial location of target symbols. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed was achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work promotes the searching of information on the peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.

  19. Validation techniques of agent based modelling for geospatial simulations

    NASA Astrophysics Data System (ADS)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying such phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Attempts to find appropriate validation techniques for ABM therefore seem necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  20. Serious games experiment toward agent-based simulation

    USGS Publications Warehouse

    Wein, Anne; Labiosa, William

    2013-01-01

    We evaluate the potential for serious games to be used as a scientifically based decision-support product that supports the United States Geological Survey’s (USGS) mission--to provide integrated, unbiased scientific information that can make a substantial contribution to societal well-being for a wide variety of complex environmental challenges. Serious or pedagogical games are an engaging way to educate decisionmakers and stakeholders about environmental challenges that are usefully informed by natural and social scientific information and knowledge and can be designed to promote interactive learning and exploration in the face of large uncertainties, divergent values, and complex situations. We developed two serious games that use challenging environmental-planning issues to demonstrate and investigate the potential contributions of serious games to inform regional-planning decisions. Delta Skelta is a game emulating long-term integrated environmental planning in the Sacramento-San Joaquin Delta, California, that incorporates natural hazards (flooding and earthquakes) and consequences for California water supplies amidst conflicting water interests. Age of Ecology is a game that simulates interactions between economic and ecologic processes, as well as natural hazards while implementing agent-based modeling. The content of these games spans the USGS science mission areas related to water, ecosystems, natural hazards, land use, and climate change. We describe the games, reflect on design and informational aspects, and comment on their potential usefulness. During the process of developing these games, we identified various design trade-offs involving factual information, strategic thinking, game-winning criteria, elements of fun, number and type of players, time horizon, and uncertainty. We evaluate the two games in terms of accomplishments and limitations. Overall, we demonstrated the potential for these games to usefully represent scientific information

  1. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Barthel, Roland

    2016-04-01

    When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty of structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, in which information is transferred from similar, but well-explored and better understood, systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs, and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where the organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework, it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches for grouping hydrographs, mostly based on a similarity measure, can be found in the literature; these have previously been used only in local-scale studies. They are tested here alongside different global feature extraction techniques. The resulting classifications are then compared to a visual "expert assessment"-based classification which serves as a reference. A ranking of the classification methods is carried out and differences are shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition. Additionally, we show examples of classes
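
    As a sketch of one of the global feature extraction techniques mentioned, the code below computes leading discrete Fourier transform magnitudes for synthetic monthly hydrographs and groups them; the series length, the number of coefficients kept, and the agglomerative grouping step are assumptions for illustration.

    ```python
    # Sketch: DFT-based global features for hydrograph similarity grouping.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(5)
    t = np.arange(120)                        # ten years of monthly levels (assumed)
    hydrographs = np.array([
        np.sin(2 * np.pi * t / 12 + rng.uniform(0, 2 * np.pi)) * rng.uniform(0.5, 2)
        + 0.1 * rng.normal(size=t.size)
        for _ in range(30)
    ])

    def dft_features(series, n_coeffs=6):
        """Magnitudes of the first n_coeffs non-constant DFT coefficients."""
        spectrum = np.fft.rfft(series - series.mean())
        return np.abs(spectrum[1:1 + n_coeffs])

    features = np.array([dft_features(h) for h in hydrographs])
    groups = AgglomerativeClustering(n_clusters=4).fit_predict(features)
    print("group sizes:", np.bincount(groups))
    ```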

  2. [Proposals for social class classification based on the Spanish National Classification of Occupations 2011 using neo-Weberian and neo-Marxist approaches].

    PubMed

    Domingo-Salvany, Antònia; Bacigalupe, Amaia; Carrasco, José Miguel; Espelt, Albert; Ferrando, Josep; Borrell, Carme

    2013-01-01

    In Spain, the new National Classification of Occupations (Clasificación Nacional de Ocupaciones [CNO-2011]) is substantially different to the 1994 edition, and requires adaptation of occupational social classes for use in studies of health inequalities. This article presents two proposals to measure social class: the new classification of occupational social class (CSO-SEE12), based on the CNO-2011 and a neo-Weberian perspective, and a social class classification based on a neo-Marxist approach. The CSO-SEE12 is the result of a detailed review of the CNO-2011 codes. In contrast, the neo-Marxist classification is derived from variables related to capital and organizational and skill assets. The proposed CSO-SEE12 consists of seven classes that can be grouped into a smaller number of categories according to study needs. The neo-Marxist classification consists of 12 categories in which home owners are divided into three categories based on capital goods and employed persons are grouped into nine categories composed of organizational and skill assets. These proposals are complemented by a proposed classification of educational level that integrates the various curricula in Spain and provides correspondences with the International Standard Classification of Education. PMID:23394892

  3. Agent-Based Mapping of Credit Risk for Sustainable Microfinance

    PubMed Central

    Lee, Joung-Hun; Jusup, Marko; Podobnik, Boris; Iwasa, Yoh

    2015-01-01

    By drawing analogies with independent research areas, we propose an unorthodox framework for mapping microfinance credit risk---a major obstacle to the sustainability of lenders outreaching to the poor. Specifically, using the elements of network theory, we constructed an agent-based model that obeys the stylized rules of microfinance industry. We found that in a deteriorating economic environment confounded with adverse selection, a form of latent moral hazard may cause a regime shift from a high to a low loan payment probability. An after-the-fact recovery, when possible, required the economic environment to improve beyond that which led to the shift in the first place. These findings suggest a small set of measurable quantities for mapping microfinance credit risk and, consequently, for balancing the requirements to reasonably price loans and to operate on a fully self-financed basis. We illustrate how the proposed mapping works using a 10-year monthly data set from one of the best-known microfinance representatives, Grameen Bank in Bangladesh. Finally, we discuss an entirely new perspective for managing microfinance credit risk based on enticing spontaneous cooperation by building social capital. PMID:25945790

  4. Agent-based mapping of credit risk for sustainable microfinance.

    PubMed

    Lee, Joung-Hun; Jusup, Marko; Podobnik, Boris; Iwasa, Yoh

    2015-01-01

    By drawing analogies with independent research areas, we propose an unorthodox framework for mapping microfinance credit risk--a major obstacle to the sustainability of lenders outreaching to the poor. Specifically, using the elements of network theory, we constructed an agent-based model that obeys the stylized rules of microfinance industry. We found that in a deteriorating economic environment confounded with adverse selection, a form of latent moral hazard may cause a regime shift from a high to a low loan payment probability. An after-the-fact recovery, when possible, required the economic environment to improve beyond that which led to the shift in the first place. These findings suggest a small set of measurable quantities for mapping microfinance credit risk and, consequently, for balancing the requirements to reasonably price loans and to operate on a fully self-financed basis. We illustrate how the proposed mapping works using a 10-year monthly data set from one of the best-known microfinance representatives, Grameen Bank in Bangladesh. Finally, we discuss an entirely new perspective for managing microfinance credit risk based on enticing spontaneous cooperation by building social capital. PMID:25945790

  5. Persuasion Model and Its Evaluation Based on Positive Change Degree of Agent Emotion

    NASA Astrophysics Data System (ADS)

    Jinghua, Wu; Wenguang, Lu; Hailiang, Meng

    Because it can support negotiations among organizations that take place at different times and in different places, and can make the negotiation process more rational and its result closer to ideal, agent-based persuasion can substantially improve cooperation among organizations. Integrating emotion change into agent persuasion further brings the artificial-intelligence advantages of agents into play. The emotions involved in agent persuasion are classified, and the concept of positive change degree is given. On this basis, a persuasion model based on the positive change degree of agent emotion is constructed and explained clearly through an example. Finally, a method of relative evaluation is given, which is also verified through a calculation example.

  6. A wavelet transform based feature extraction and classification of cardiac disorder.

    PubMed

    Sumathi, S; Beaulah, H Lilly; Vanithamani, R

    2014-09-01

    This paper presents an intelligent diagnosis system using a hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) model for classification of electrocardiogram (ECG) signals. The method uses the Symlet wavelet transform to analyze the ECG signals and extract parameters related to dangerous cardiac arrhythmias. These parameters were used as inputs to the ANFIS classifier for five important types of ECG signal: Normal Sinus Rhythm (NSR), Atrial Fibrillation (AF), Pre-Ventricular Contraction (PVC), Ventricular Fibrillation (VF), and Ventricular Flutter (VFLU) with myocardial ischemia. The inclusion of ANFIS in complex investigative algorithms yields very interesting recognition and classification capabilities across a broad spectrum of biomedical engineering. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy. The results indicate that the proposed ANFIS model shows potential advantages in classifying ECG signals; a classification accuracy of 98.24% was achieved. PMID:25023652
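
    A minimal sketch of the wavelet feature extraction stage is shown below, using the Symlet family named in the abstract; the decomposition level, the sub-band statistics, and the generic neural network standing in for the ANFIS classifier are all assumptions.

    ```python
    # Sketch: Symlet wavelet decomposition of an ECG segment and sub-band
    # statistics as classifier inputs (a generic MLP stands in for ANFIS).
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(signal, wavelet="sym8", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        feats = []
        for c in coeffs:                       # approximation + detail sub-bands
            feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
        return np.array(feats)

    rng = np.random.default_rng(6)
    # Synthetic 2-class stand-in for normal vs. arrhythmic segments
    signals = [np.sin(np.linspace(0, 20, 360)) + 0.1 * rng.normal(size=360) for _ in range(50)]
    signals += [rng.normal(size=360) for _ in range(50)]
    X = np.array([wavelet_features(s) for s in signals])
    y = np.array([0] * 50 + [1] * 50)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```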

  7. Synthesized Population Databases: A US Geospatial Database for Agent-Based Models

    PubMed Central

    Wheaton, William D.; Cajka, James C.; Chasteen, Bernadette M.; Wagener, Diane K.; Cooley, Philip C.; Ganapathi, Laxminarayana; Roberts, Douglas J.; Allpress, Justine L.

    2010-01-01

    Agent-based models simulate large-scale social systems. They assign behaviors and activities to “agents” (individuals) within the population being modeled and then allow the agents to interact with the environment and each other in complex simulations. Agent-based models are frequently used to simulate infectious disease outbreaks, among other uses. RTI used and extended an iterative proportional fitting method to generate a synthesized, geospatially explicit, human agent database that represents the US population in the 50 states and the District of Columbia in the year 2000. Each agent is assigned to a household; other agents make up the household occupants. For this database, RTI developed the methods for (1) generating synthesized households and persons; (2) assigning agents to schools and workplaces so that complex interactions among agents as they go about their daily activities can be taken into account; and (3) generating synthesized human agents who occupy group quarters (military bases, college dormitories, prisons, nursing homes). In this report, we describe both the methods used to generate the synthesized population database and the final data structure and data content of the database. This information will provide researchers with the information they need to use the database in developing agent-based models. Portions of the synthesized agent database are available to any user upon request. RTI will extract a portion (a county, region, or state) of the database for users who wish to use this database in their own agent-based models. PMID:20505787

  8. Transport on Riemannian manifold for functional connectivity-based classification.

    PubMed

    Ng, Bernard; Dressler, Martin; Varoquaux, Gaël; Poline, Jean Baptiste; Greicius, Michael; Thirion, Bertrand

    2014-01-01

    We present a Riemannian approach for classifying fMRI connectivity patterns before and after intervention in longitudinal studies. A fundamental difficulty with using connectivity as features is that covariance matrices live on the positive semi-definite cone, which renders their elements inter-related. The implicit independent feature assumption in most classifier learning algorithms is thus violated. In this paper, we propose a matrix whitening transport for projecting the covariance estimates onto a common tangent space to reduce the statistical dependencies between their elements. We show on real data that our approach provides significantly higher classification accuracy than directly using Pearson's correlation. We further propose a non-parametric scheme for identifying significantly discriminative connections from classifier weights. Using this scheme, a number of neuroanatomically meaningful connections are found, whereas no significant connections are detected with pure permutation testing. PMID:25485405
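
    The tangent-space projection at the heart of this approach can be sketched compactly. The code below whitens each covariance by the ensemble mean and takes the matrix logarithm, a simplified log-Euclidean variant of the whitening transport described in the paper; the arithmetic-mean reference point is an assumption.

    ```python
    # Sketch: whiten covariance matrices by a reference mean, then map them to a
    # common tangent space via the matrix logarithm before a linear classifier.
    import numpy as np
    from scipy.linalg import logm, sqrtm, inv

    def tangent_features(covs):
        """Whiten each covariance by the ensemble mean, then take the matrix log."""
        mean_cov = np.mean(covs, axis=0)           # arithmetic mean as reference (assumed)
        w = inv(np.real(sqrtm(mean_cov)))          # whitening transform
        feats = []
        for c in covs:
            s = w @ c @ w.T                        # transported covariance
            l = np.real(logm(s))                   # map to tangent (vector) space
            iu = np.triu_indices_from(l)
            feats.append(l[iu])
        return np.array(feats)

    rng = np.random.default_rng(7)
    covs = []
    for _ in range(20):
        a = rng.normal(size=(100, 10))             # stand-in for fMRI time series
        covs.append(np.cov(a, rowvar=False) + 1e-3 * np.eye(10))
    print(tangent_features(np.array(covs)).shape)  # (20, 55) upper-triangle features
    ```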

  9. MEDLINE Abstracts Classification Based on Noun Phrases Extraction

    NASA Astrophysics Data System (ADS)

    Ruiz-Rico, Fernando; Vicedo, José-Luis; Rubio-Sánchez, María-Consuelo

    Many algorithms have emerged in recent years to tackle automated text categorization. They have been exhaustively studied, leading to several variants and combinations, not only in the particular procedures but also in the treatment of the input data. A widely used approach is representing documents as a Bag-Of-Words (BOW) and weighting tokens with the TFIDF schema. Many researchers have pursued precision and recall improvements and classification time reductions by enriching BOW with stemming, n-grams, feature selection, noun phrases, metadata, weight normalization, etc. We contribute to this field with a novel combination of these techniques. For evaluation purposes, we provide comparisons to previous works using SVM against the simple BOW. The well-known OHSUMED corpus is exploited and different sets of categories are selected, as previously done in the literature. The conclusion is that the proposed method can be successfully applied to existing binary classifiers such as SVM, outperforming the combination of BOW and TFIDF approaches.
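
    The baseline the paper compares against, BOW with TFIDF weighting feeding a linear SVM, can be sketched as below on toy documents; in the proposed method, noun phrases and the other enrichments would augment or replace the plain token stream.

    ```python
    # Sketch: the BOW + TFIDF + linear SVM baseline, on toy abstracts rather
    # than the OHSUMED corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = [
        "myocardial infarction treated with thrombolytic agents",
        "randomized trial of beta blockers after infarction",
        "renal failure and dialysis outcomes in elderly patients",
        "chronic kidney disease progression and dialysis",
    ]
    labels = ["cardio", "cardio", "renal", "renal"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
                        LinearSVC())
    clf.fit(docs, labels)
    print(clf.predict(["dialysis after acute renal failure"]))
    ```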

  10. Material classification based on multi-band polarimetric images fusion

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-05-01

    Polarization imparted by surface reflections contains unique and discriminatory signatures which may augment spectral target-detection techniques. With the development of multi-band polarization imaging technology, it is becoming more and more important to integrate polarimetric, spatial and spectral information to improve target discrimination. In this study, investigations were performed on combining multi-band polarimetric images through false color mapping and a wavelet-based image fusion method. The objective of this effort was to extend the investigation of the use of polarized light to target detection and material classification. There is great variation in polarization within and between each of the bandpasses, and this variation is comparable in magnitude to the variation in intensity. At the same time, the contrast in polarization is greater than that in intensity, and polarization contrast increases as intensity contrast decreases. Chromaticity can be used to make targets stand out more clearly against the background, and materials can be divided into conductors and nonconductors through polarization information. So, through false color mapping, the part of the polarimetric information that differs between the bandpasses and the part common to all bandpasses are combined; in the resulting image the conductors and nonconductors are assigned different colors. Panchromatic polarimetric images are then fused with the resulting image through wavelet decomposition, and the final fused image has more detailed information and is easier to interpret. This study demonstrated, using digital image data collected by an imaging spectropolarimeter, that multi-band imaging polarimetry is likely to provide an advantage in target detection and material classification.
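
    The wavelet fusion stage can be sketched as below; the fusion rule used here (average the approximation coefficients, keep the stronger detail coefficients) is a common simplification and not necessarily the authors' exact rule, and random arrays stand in for the false-color and panchromatic images.

    ```python
    # Sketch: wavelet-domain fusion of a false-color band with a panchromatic
    # polarimetric image.
    import numpy as np
    import pywt

    rng = np.random.default_rng(13)
    false_color_band = rng.random((128, 128))   # one channel of the false-color map
    panchromatic = rng.random((128, 128))       # panchromatic polarimetric image

    cA1, (cH1, cV1, cD1) = pywt.dwt2(false_color_band, "db2")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(panchromatic, "db2")

    fused = pywt.idwt2(
        (0.5 * (cA1 + cA2),                                    # average approximations
         (np.where(np.abs(cH1) > np.abs(cH2), cH1, cH2),       # keep stronger details
          np.where(np.abs(cV1) > np.abs(cV2), cV1, cV2),
          np.where(np.abs(cD1) > np.abs(cD2), cD1, cD2))),
        "db2")
    print(fused.shape)
    ```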

  11. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented. PMID:16764265

  12. Wavelet-SVM classifier based on texture features for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Wu, Bingfang; Zhu, Jianjun; Zhou, Yuemin; Zhu, Liang

    2008-12-01

    Texture features are recognized as a special cue in images, representing the spatial relations of gray-level pixels. Texture analysis is now widely applied in image classification. Combined with wavelet multi-resolution analysis or the statistical learning theory of support vector machines, texture analysis can increasingly improve the quality of classification. In this paper, we focus on land cover mapping for the Three Gorges reservoir using SPOT-5 remote sensing data, employing a new classification method: a wavelet-SVM classifier based on texture features. Compared with the traditional maximum likelihood classifier and an SVM classifier using spectral features alone, this method produces more accurate classification results. Based on the real environment of the Three Gorges reservoir land cover, a best texture group is selected from several texture features. The image is decomposed at different levels, which is one of the main advantages of wavelets, and the texture features are computed in every sub-image; the redundancy is then eliminated by concentrating the texture features into the first principal components using principal component analysis. Finally, with the first principal components as input, the classification result is obtained using SVM at every decomposition scale. A remaining problem that cannot be overlooked is how to select the best SVM parameters, so an iterative rule based on classification accuracy is introduced: the higher the accuracy, the more appropriate the parameters.

  13. Mobile Agents for Web-Based Systems Management.

    ERIC Educational Resources Information Center

    Bellavista, Paolo; Corradi, Antonio; Tarantino, Fabio; Stefanelli, Cesare

    1999-01-01

    Discussion of mobile agent technology that overcomes the limits of traditional approaches to the management of global Web systems focuses on the MAMAS (mobile agents for the management of applications and systems) management environment that uses JAVA as its implementation language. Stresses security and interoperability. (Author/LRW)

  14. Interface Agent for Computer-based Tutoring Systems.

    ERIC Educational Resources Information Center

    Dang, Trang; Ghenniwa, Hamada; Kamel, Mohamed

    1999-01-01

    Proposes an interface agent for intelligent tutoring systems that creates a collaborative learning environment between the learner and the tutoring software. Describes implementation of a prototype using the IBM Agent Builder Environment Toolkit to use with an intelligent tutoring system for algebra and considers benefits in a lifelong learning…

  15. "Campus" - An Agent-Based Platform for Distance Education.

    ERIC Educational Resources Information Center

    Westhoff, Dirk; Unger, Claus

    This paper presents "Campus," an environment that allows University of Hagen (Germany) students to connect briefly to the Internet but remain represented by personalized, autonomous agents that can fulfill a variety of information, communication, planning, and cooperation tasks. A brief survey is presented of existing mobile agent system…

  16. A novel sparse coding algorithm for classification of tumors based on gene expression data.

    PubMed

    Kolali Khormuji, Morteza; Bazrafkan, Mehrnoosh

    2016-06-01

    High-dimensional genomic and proteomic data play an important role in many applications in medicine such as prognosis of diseases, diagnosis, prevention and molecular biology, to name a few. Classifying such data is a challenging task due to the various issues such as curse of dimensionality, noise and redundancy. Recently, some researchers have used the sparse representation (SR) techniques to analyze high-dimensional biological data in various applications in classification of cancer patients based on gene expression datasets. A common problem with all SR-based biological data classification methods is that they cannot utilize the topological (geometrical) structure of data. More precisely, these methods transfer the data into sparse feature space without preserving the local structure of data points. In this paper, we proposed a novel SR-based cancer classification algorithm based on gene expression data that takes into account the geometrical information of all data. Precisely speaking, we incorporate the local linear embedding algorithm into the sparse coding framework, by which we can preserve the geometrical structure of all data. For performance comparison, we applied our algorithm on six tumor gene expression datasets, by which we demonstrate that the proposed method achieves higher classification accuracy than state-of-the-art SR-based tumor classification algorithms. PMID:26337064

  17. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was to investigate advanced image analysis methods for geomorphological mapping. The methods employed included multiresolution segmentation of the GTOPO30 Digital Elevation Model (DEM) and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects based on their elevation and shape parameters resulted in the extraction of the geomorphological features. The resulting boundaries were found satisfactory in comparison with those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.

  18. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    NASA Astrophysics Data System (ADS)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
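
    The first stage, learning a per-category ICA color transformation, can be sketched as follows; the SIFT computation on the resulting planes is omitted, and the random image is a stand-in for a category's training data.

    ```python
    # Sketch: ICA-based color transformation from an image's RGB pixels, the
    # first stage of a CIC-SIFT-style descriptor.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(8)
    img = rng.random((64, 64, 3))                 # stand-in for a category's training image

    pixels = img.reshape(-1, 3)
    ica = FastICA(n_components=3, random_state=0)
    components = ica.fit_transform(pixels)        # rows: pixels, cols: independent components

    cic_image = components.reshape(64, 64, 3)     # three independent color planes
    print("per-plane variance:", cic_image.var(axis=(0, 1)))
    # Dense or keypoint SIFT descriptors would then be computed on each plane.
    ```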

  19. Object-based classification as an alternative approach to the traditional pixel-based classification to identify potential habitat of the grasshopper sparrow.

    PubMed

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not solely rely on the single pixel value, but also on shape, texture, and pixel spatial continuity. The object-based classification is a knowledge base process where an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility to use the contextual information associated to objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided. PMID:17985180

  20. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not solely rely on the single pixel value, but also on shape, texture, and pixel spatial continuity. The object-based classification is a knowledge base process where an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility to use the contextual information associated to objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  1. Permutations of Control: Cognitive Considerations for Agent-Based Learning Environments.

    ERIC Educational Resources Information Center

    Baylor, Amy L.

    2001-01-01

    Discussion of intelligent agents and their use in computer learning environments focuses on cognitive considerations. Presents four dimension of control that should be considered in designing agent-based learning environments: learner control, from constructivist to instructivist; feedback; relationship of learner to agent; and learner confidence…

  2. Children's Agentive Orientations in Play-Based and Academically Focused Preschools in Hong Kong

    ERIC Educational Resources Information Center

    Cheng Pui-Wah, Doris; Reunamo, Jyrki; Cooper, Paul; Liu, Karen; Vong, Keang-ieng Peggy

    2015-01-01

    The article describes a comparative case study on children's agentive orientations in two Hong Kong preschools, one is play-based and the other is academically focused. Agentive orientations were measured using Reunamo's interview tool, which focuses on children's uses of accommodative and agentive orientations in everyday situations. The findings…

  3. Chinese wine classification system based on micrograph using combination of shape and structure features

    NASA Astrophysics Data System (ADS)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded from their micrographs. Micrographs of Chinese wines show floccules, sticks and granules of varying shape and size. Different wines have different microstructures and micrographs, so we study the classification of Chinese wines based on micrographs. The shape and structure of the wine particles in the microstructure are the most important features for the recognition and classification of wines, so we introduce a feature extraction method that can efficiently describe the structure and region shape of a micrograph. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in the paper, based on area, perimeter and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using the combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features versus the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.

  4. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection

    PubMed Central

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measure of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information to the importance measure of the features to substitute the document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification. PMID:26339638

  5. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated classification system based on ultrasonic inspection would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features. Genetic-algorithm-based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic-algorithm-based technique in terms of classification accuracy and the selection of an optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to better performance, with a classification rate of 96% compared with 94% for the genetic algorithm. PMID:26081920
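
    A minimal sketch of genetic-algorithm feature selection is given below, with a k-nearest-neighbor classifier standing in for the paper's neural network fitness evaluator; the population size, selection scheme and mutation rate are illustrative assumptions.

    ```python
    # Sketch: a genetic algorithm over binary feature masks, with a classifier's
    # cross-validated accuracy as the fitness function.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(9)
    X, y = make_classification(n_samples=200, n_features=30, n_informative=8, random_state=0)

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)   # stand-in for the paper's ANN
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(20, X.shape[1]))          # random initial masks
    for generation in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]               # truncation selection
        cut = rng.integers(1, X.shape[1], size=10)
        children = np.array([np.concatenate([parents[i % 10][:c],
                                             parents[(i + 1) % 10][c:]])
                             for i, c in enumerate(cut)])     # one-point crossover
        flip = rng.random(children.shape) < 0.02              # mutation
        children = np.where(flip, 1 - children, children)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected", best.sum(), "features, fitness:", round(fitness(best), 3))
    ```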

  6. Illusory versus genuine control in agent-based games

    NASA Astrophysics Data System (ADS)

    Satinover, J. B.; Sornette, D.

    2009-02-01

    In the Minority, Majority and Dollar Games (MG, MAJG, $G), agents compete for rewards, acting in accord with the previously best-performing of their strategies. Different aspects/kinds of real-world markets are modelled by these games. In the MG, agents compete for scarce resources; in the MAJG, agents imitate the group to exploit a trend; in the $G, agents attempt to predict and benefit both from trends and from changes in the direction of a market. It has previously been shown that in the MG, for a reasonable number of preliminary time steps preceding equilibrium (Time Horizon MG, THMG), agents' attempts to optimize their gains by active strategy selection are “illusory”: the hypothetical gains of their strategies are greater on average than the agents' actual average gains. Furthermore, if a small proportion of agents deliberately choose and act in accord with their seemingly worst-performing strategy, these outperform all other agents on average, and even attain mean positive gain, otherwise rare for agents in the MG. This latter phenomenon raises the question of how well the optimization procedure works in the THMAJG and TH$G. We demonstrate that the illusion of control is absent in the THMAJG and TH$G. This provides further clarification of the kinds of situations subject to genuine control, and those not, in set-ups a priori defined to emphasize the importance of optimization.
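
    A bare-bones Minority Game, the base model behind these variants, can be simulated in a few lines; the parameter values below (number of agents, memory length, strategies per agent) are conventional choices, not the paper's exact settings.

    ```python
    # Sketch: a minimal Minority Game. Each agent holds two fixed random
    # strategies (lookup tables over the last m outcomes) and plays the one with
    # the better virtual score; agents on the minority side gain a point.
    import numpy as np

    rng = np.random.default_rng(10)
    N, m, S, T = 101, 3, 2, 2000              # agents, memory, strategies each, steps
    strategies = rng.integers(0, 2, size=(N, S, 2 ** m))   # action for each history
    virtual = np.zeros((N, S))                # per-strategy virtual scores
    wealth = np.zeros(N)
    history = rng.integers(0, 2 ** m)         # encoded last-m outcomes

    for _ in range(T):
        best = virtual.argmax(axis=1)                          # each agent's best strategy
        actions = strategies[np.arange(N), best, history]
        minority = int(actions.sum() < N / 2)                  # winning (minority) action
        wealth += (actions == minority)
        virtual += (strategies[:, :, history] == minority)     # update virtual scores
        history = ((history << 1) | minority) % (2 ** m)

    print("mean gain per step:", wealth.mean() / T)            # typically below 0.5
    ```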

  7. Dynamic calibration of agent-based models using data assimilation.

    PubMed

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds. PMID:27152214
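
    The EnKF analysis step the paper applies can be sketched on a toy scalar state; the random-walk forecast standing in for one step of the agent-based model, the noise levels, and the direct observation operator are all assumptions.

    ```python
    # Sketch: an ensemble Kalman filter update on a toy population-count state,
    # in the spirit of the paper's footfall example.
    import numpy as np

    rng = np.random.default_rng(11)
    n_ens, T = 50, 40
    true_pop = 1000.0
    ensemble = rng.normal(1200, 200, size=n_ens)   # prior guesses of city population

    obs_std = 50.0
    for t in range(T):
        # Forecast step: each ensemble member evolves with model noise, standing
        # in for one run of the agent-based model.
        true_pop += rng.normal(0, 10)
        ensemble += rng.normal(0, 20, size=n_ens)

        # Observation: a noisy footfall-style count (here observing the state directly)
        obs = true_pop + rng.normal(0, obs_std)

        # EnKF analysis step with the sample variance and perturbed observations
        P = ensemble.var(ddof=1)
        K = P / (P + obs_std ** 2)                  # Kalman gain (scalar state)
        perturbed_obs = obs + rng.normal(0, obs_std, size=n_ens)
        ensemble += K * (perturbed_obs - ensemble)

    print(f"truth {true_pop:.0f}, analysis mean {ensemble.mean():.0f} +/- {ensemble.std():.0f}")
    ```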

  8. Patient-centered appointment scheduling using agent-based simulation.

    PubMed

    Turkcan, Ayten; Toscos, Tammy; Doebbeling, Brad N

    2014-01-01

    Enhanced access and continuity are key components of patient-centered care. Existing studies show that several interventions, such as providing same-day appointments, walk-in services, after-hours care, and group appointments, have been used to redesign healthcare systems for improved access to primary care. However, an intervention focusing on a single component of care delivery (e.g. improving access to acute care) might have a negative impact on other components of the system (e.g. reduced continuity of care for chronic patients). Therefore, primary care clinics should consider implementing multiple interventions tailored to the needs of their patient population. We used rapid ethnography and observation to better understand clinic workflow and key constraints. We then developed an agent-based simulation model that includes all access modalities (appointments, walk-ins, and after-hours access), incorporates resources and key constraints, and determines the appointment scheduling method that best improves access and continuity of care. This paper demonstrates the value of simulation models for testing a variety of alternative strategies to improve access to care through scheduling. PMID:25954423

  9. Advanced nanoelectronic architectures for THz-based biological agent detection

    NASA Astrophysics Data System (ADS)

    Woolard, Dwight L.; Jensen, James O.

    2009-02-01

    The U.S. Army Research Office (ARO) and the U.S. Army Edgewood Chemical Biological Center (ECBC) jointly lead and support novel research programs that are advancing the state-of-the-art in nanoelectronic engineering in application areas that have relevance to national defense and security. One fundamental research area that is presently being emphasized by ARO and ECBC is the exploratory investigation of new bio-molecular architectural concepts that can be used to achieve rapid, reagent-less detection and discrimination of biological warfare (BW) agents, through the control of multi-photon and multi-wavelength processes at the nanoscale. This paper will overview an ARO/ECBC led multidisciplinary research program presently under the support of the U.S. Defense Threat Reduction Agency (DTRA) that seeks to develop new devices and nanoelectronic architectures that are effective for extracting THz signatures from target bio-molecules. Here, emphasis will be placed on the new nanosensor concepts and THz/Optical measurement methodologies for spectral-based sequencing/identification of genetic molecules.

  10. Biological agent detection based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Mudigonda, Naga R.; Kacelenga, Ray

    2006-05-01

    This paper presents an algorithm, based on principal component analysis, for the detection of biological threats using General Dynamics Canada's 4WARN Sentry 3000 biodetection system. The proposed method employs a statistical approach for estimating background biological activity, making the algorithm adaptive to varying background situations. The method attempts to characterize the pattern of change that occurs in the fluorescent particle counts distribution and uses this information to suppress false alarms. The performance of the method was evaluated using a total of 68 tests, including 51 releases of Bacillus globigii (BG), six releases of BG in the presence of obscurants, six releases of obscurants only, and five releases of ovalbumin at the Ambient Breeze Tunnel Test facility, Battelle, OH. The peak one-minute average concentration of BG used in the tests ranged from 10 to 65 Agent Containing Particles per Liter of Air (ACPLA). The obscurants used in the tests included diesel smoke, white grenade smoke, and salt solution. The method successfully detected BG at a sensitivity of 10 ACPLA and achieved an overall probability of detection of 94% for BG without generating any false alarms for obscurants at a detection threshold of 0.6 on a scale of 0 to 1. The method also successfully detected BG in the presence of diesel smoke and salt water fumes. The system responded to all five ovalbumin releases with noticeable trends in algorithm output and alarmed for two releases at the selected detection threshold.
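
    The detection logic sketched in the abstract (learn the background particle-count distribution, score departures from it on a 0-1 scale, alarm above a threshold) can be illustrated generically with PCA reconstruction residuals. Everything below is an assumption-laden stand-in, not the 4WARN algorithm: the synthetic histograms, the residual-to-score mapping, and the placement of the 0.6 threshold are all illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    lam = np.linspace(40, 5, 16)            # mean background counts per size bin

    # Background: 500 one-minute fluorescent particle-count histograms.
    background = rng.poisson(lam, size=(500, 16)).astype(float)
    pca = PCA(n_components=4).fit(background)

    def residual(samples):
        """Distance from the background subspace learned by PCA."""
        recon = pca.inverse_transform(pca.transform(samples))
        return np.linalg.norm(samples - recon, axis=1)

    scale = np.percentile(residual(background), 99)   # typical background residual

    def score(sample):
        r = residual(sample[None, :])[0]
        return r / (r + scale)              # squashed to a 0-1 detection scale

    threshold = 0.6                         # placement on the scale is illustrative
    clean = rng.poisson(lam).astype(float)
    release = clean + np.r_[np.zeros(10), np.full(6, 60.0)]   # agent-like excess

    for name, s in [("background minute", clean), ("release minute", release)]:
        print(f"{name}: score={score(s):.2f}  alarm={score(s) > threshold}")
    ```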

  11. E-laboratories: agent-based modeling of electricity markets.

    SciTech Connect

    North, M.; Conzelmann, G.; Koritarov, V.; Macal, C.; Thimmapuram, P.; Veselka, T.

    2002-05-03

    Electricity markets are complex adaptive systems that operate under a wide range of rules that span a variety of time scales. These rules are imposed both from above by society and below by physics. Many electricity markets are undergoing or are about to undergo a transition from centrally regulated systems to decentralized markets. Furthermore, several electricity markets have recently undergone this transition with extremely unsatisfactory results, most notably in California. These high stakes transitions require the introduction of largely untested regulatory structures. Suitable laboratories that can be used to test regulatory structures before they are applied to real systems are needed. Agent-based models can provide such electronic laboratories or "e-laboratories." To better understand the requirements of an electricity market e-laboratory, a live electricity market simulation was created. This experience helped to shape the development of the Electricity Market Complex Adaptive Systems (EMCAS) model. To explore EMCAS' potential as an e-laboratory, several variations of the live simulation were created. These variations probed the possible effects of changing power plant outages and price setting rules on electricity market prices.

  12. Agents Based e-Commerce and Securing Exchanged Information

    NASA Astrophysics Data System (ADS)

    Al-Jaljouli, Raja; Abawajy, Jemal

    Mobile agents have been implemented in e-Commerce to search and filter information of interest from electronic markets. When the information is very sensitive and critical, it is important to develop a novel security protocol that can efficiently protect the information from malicious tampering as well as unauthorized disclosure, or at least detect any malicious act of intruders. In this chapter, we describe robust security techniques that ensure sound security of the information gathered throughout the agent's itinerary against various security attacks, including truncation attacks. A sound security protocol is described, which implements the various security techniques that would jointly prevent, or at least detect, any malicious act of intruders. We reason about the soundness of the protocol using Symbolic Trace Analyzer (STA), a formal verification tool that is based on symbolic techniques. We analyze the protocol in key configurations and show that it is free of flaws. We also show that the protocol fulfils the various security requirements of exchanged information in MAS, including data integrity, data confidentiality, data authenticity, origin confidentiality and data non-repudiability.

  13. Agent-based modeling to simulate the dengue spread

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Tao, Haiyan; Ye, Zhiwei

    2008-10-01

    In this paper, we introduce agent-based modeling (ABM) as a method for simulating the unique process of dengue spread. Dengue is an acute infectious disease with a long history of over 200 years. Unlike diseases that can be transmitted directly from person to person, dengue spreads through an obligate mosquito vector. There is still no specific effective medicine or vaccine for dengue, so the best way to prevent its spread is to take precautions beforehand. It is therefore crucial to study the dynamic process of dengue spread, which closely relates to human-environment interactions, a setting in which ABM works effectively. The model attempts to simulate dengue spread more realistically, in a bottom-up fashion, and to overcome a common limitation of ABM, namely overlooking the influence of geographic and environmental factors. By considering the influence of the environment, Aedes aegypti ecology, and other epidemiological characteristics of dengue spread, ABM can be regarded as a useful way to simulate the whole process and so disclose the essence of the evolution of dengue spread.

  14. Agent-Based Knowledge Discovery for Modeling and Simulation

    SciTech Connect

    Haack, Jereme N.; Cowell, Andrew J.; Marshall, Eric J.; Fligg, Alan K.; Gregory, Michelle L.; McGrath, Liam R.

    2009-09-15

    This paper describes an approach to using agent technology to extend the automated discovery mechanism of the Knowledge Encapsulation Framework (KEF). KEF is a suite of tools to enable the linking of knowledge inputs (relevant, domain-specific evidence) to modeling and simulation projects, as well as other domains that require an effective collaborative workspace for knowledge-based tasks. This framework can be used to capture evidence (e.g., trusted material such as journal articles and government reports), discover new evidence (covering both trusted and social media), enable discussions surrounding domain-specific topics and provide automatically generated semantic annotations for improved corpus investigation. The current KEF implementation is presented within a semantic wiki environment, providing a simple but powerful collaborative space for team members to review, annotate, discuss and align evidence with their modeling frameworks. The novelty in this approach lies in the combination of automatically tagged and user-vetted resources, which increases user trust in the environment, leading to ease of adoption for the collaborative environment.

  15. Agent-Based Crowd Simulation Considering Emotion Contagion for Emergency Evacuation Problem

    NASA Astrophysics Data System (ADS)

    Faroqi, H.; Mesgari, M.-S.

    2015-12-01

    During emergencies, emotions greatly affect human behaviour, so for more realistic multi-agent simulations of emergency evacuations it is important to incorporate emotions and their effects on the agents. In brief, emotional contagion is a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes. In this study, we simulate an emergency situation in an open square area with three exits, with Adult and Child agents that behave differently. Security agents are also included to guide the Adults and Children to the exits and keep them calm. Six emotion levels are considered for each agent across different scenarios and situations. The agent-based model is initialized by randomly scattering the agent population; when an alarm occurs, each agent reacts to the situation based on its own and its neighbours' current circumstances. The main goal of each agent is first to find an exit, and then to help other agents find theirs. The numbers of evacuated and injured agents, along with their emotion levels, are compared across scenarios and initializations in order to evaluate the simulated model. NetLogo 5.2 is used as the multi-agent simulation framework, with R as the development language.

  16. Agent-based simulation of building evacuation using a grid graph-based model

    NASA Astrophysics Data System (ADS)

    Tan, L.; Lin, H.; Hu, M.; Che, W.

    2014-02-01

    Shifting from macroscopic to microscopic models, the agent-based approach has been widely used to model crowd evacuation as more attention is paid to individualized behaviour. Since indoor evacuation behaviour is closely related to the spatial features of the building, effective representation of indoor space is essential for the simulation of building evacuation. The traditional cell-based representation has limitations in reflecting spatial structure and is not suitable for topology analysis. Aiming to incorporate the powerful topology analysis functions of GIS to facilitate agent-based simulation of building evacuation, we used a grid graph-based model in this study to represent the indoor space. Such a model allows us to establish an evacuation network at a micro level. Potential escape routes from each node can thus be analysed through GIS network analysis functions, considering both the spatial structure and route capacity. This better supports agent-based modelling of evacuees' behaviour, including route choice and local movements. As a case study, we conducted a simulation of emergency evacuation from the second floor of an office building using Agent Analyst as the simulation platform. The results demonstrate the feasibility of the proposed method, as well as the potential of GIS in visualizing and analysing simulation results.

  17. Is it time for brushless scrubbing with an alcohol-based agent?

    PubMed

    Gruendemann, B J; Bjerke, N B

    2001-12-01

    The practice of surgical scrubbing in perioperative settings is changing rapidly. This article presents information about eliminating the traditional scrub brush technique and using an alcohol formulation for surgical hand scrubs. Also covered are antimicrobial agents, relevant US Food and Drug Administration classifications, skin and fingernail care, and implementation of changes. The article challenges surgical team members to evaluate a new and different approach to surgical hand scrubbing. PMID:11795059

  18. Cell morphology-based classification of red blood cells using holographic imaging informatics

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2016-01-01

    We present methods that automatically select a linear or nonlinear classifier for red blood cell (RBC) classification by analyzing the equality of the covariance matrices in Gabor-filtered holographic images. First, the phase images of the RBCs are numerically reconstructed from their holograms, which are recorded using off-axis digital holographic microscopy (DHM). Second, each RBC is segmented using a marker-controlled watershed transform algorithm and the inner part of the RBC is identified and analyzed. Third, the Gabor wavelet transform is applied to the segmented cells to extract a series of features, which then undergo a multivariate statistical test to evaluate the equality of the covariance matrices of the different shapes of the RBCs using selected features. When these covariance matrices are not equal, a nonlinear classification scheme based on quadratic functions is applied; otherwise, a linear classification is applied. We used the stomatocyte, discocyte, and echinocyte RBC for classifier training and testing. Simulation results demonstrated that 10 of the 14 RBC features are useful in RBC classification. Experimental results also revealed that the covariance matrices of the three main RBC groups are not equal and that a nonlinear classification method has a much lower misclassification rate. The proposed automated RBC classification method has the potential for use in drug testing and the diagnosis of RBC-related diseases. PMID:27375953
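
    The classifier-selection step above can be sketched as a Box's M test feeding a linear/quadratic switch. The paper does not name its multivariate test, so Box's M is an assumption here, and synthetic Gaussian clusters stand in for the Gabor features of the stomatocyte, discocyte, and echinocyte classes.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)

    def boxs_m_test(groups):
        """Box's M test for equal covariance matrices (chi-square approximation)."""
        g, p = len(groups), groups[0].shape[1]
        ns = np.array([len(x) for x in groups])
        covs = [np.cov(x, rowvar=False) for x in groups]
        pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (ns.sum() - g)
        M = (ns.sum() - g) * np.log(np.linalg.det(pooled)) \
            - sum((n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
        c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (g - 1))
             * (np.sum(1 / (ns - 1)) - 1 / (ns.sum() - g)))
        df = (g - 1) * p * (p + 1) / 2
        return 1 - chi2.cdf(M * (1 - c), df)          # p-value

    rng = np.random.default_rng(4)
    # Three "RBC shape classes" with deliberately unequal covariances.
    classes = [rng.multivariate_normal(mu, s * np.eye(3), 200)
               for mu, s in [([0, 0, 0], 1.0), ([2, 0, 1], 0.3), ([0, 2, 2], 2.0)]]
    X = np.vstack(classes)
    y = np.repeat([0, 1, 2], 200)

    p_value = boxs_m_test(classes)
    clf = (LinearDiscriminantAnalysis() if p_value > 0.05
           else QuadraticDiscriminantAnalysis())      # quadratic when unequal
    print(f"Box's M p-value: {p_value:.3g} -> {type(clf).__name__}")
    print("training accuracy:", clf.fit(X, y).score(X, y))
    ```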

  1. Fuzzy-logic-based hybrid locomotion mode classification for an active pelvis orthosis: Preliminary results.

    PubMed

    Yuan, Kebin; Parri, Andrea; Yan, Tingfang; Wang, Long; Munih, Marko; Vitiello, Nicola; Wang, Qining

    2015-08-01

    In this paper, we present a fuzzy-logic-based hybrid locomotion mode classification method for an active pelvis orthosis. Locomotion information measured by the onboard hip joint angle sensors and the pressure insoles is used to classify five locomotion modes, including two static modes (sitting, standing still) and three dynamic modes (level-ground walking, ascending stairs, and descending stairs). The proposed method first distinguishes these two kinds of modes by monitoring the variation of the relative hip joint angle between the two legs within a specific period. Static states are then classified by the time-based absolute hip joint angle. As for the dynamic modes, a fuzzy-logic-based method is proposed for their classification. Preliminary experiments with three able-bodied subjects achieved an off-line classification accuracy higher than 99.49%. PMID:26737144

  2. Interferogram-based breast tumor classification using microwave-induced thermoacoustic imaging.

    PubMed

    Hao Nan; Haghi, Benyamin Allahgholizadeh; Arbabian, Amin

    2015-08-01

    Microwave-induced thermoacoustic (TA) imaging combines the dielectric/conductivity contrast in the microwave range with the high resolution of ultrasound imaging. Lack of ionizing radiation exposure in TA imaging makes this technique suitable for frequent screening applications, as with breast cancer screening. In this paper we demonstrate breast tumor classification based on TA imaging. The sensitivity of the signal-based classification algorithm to errors in the estimation of tumor locations is investigated. To reduce this sensitivity, we propose to use the interferogram of received pressure waves as the feature basis used for classification, and demonstrate the robustness based on a finite-difference time-domain (FDTD) simulation framework. PMID:26736853

  3. The Comprehensive AOCMF Classification: Skull Base and Cranial Vault Fractures – Level 2 and 3 Tutorial

    PubMed Central

    Ieva, Antonio Di; Audigé, Laurent; Kellman, Robert M.; Shumrick, Kevin A.; Ringl, Helmut; Prein, Joachim; Matula, Christian

    2014-01-01

    The AOCMF Classification Group developed a hierarchical three-level craniomaxillofacial classification system with increasing levels of complexity and detail. The top-level (level 1) system distinguishes four major anatomical units: the mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). This tutorial presents the level 2 and more detailed level 3 systems for the skull base and cranial vault units. The level 2 system describes fracture location, outlining the topographic boundaries of the anatomic regions and considering in particular the endocranial and exocranial skull base surfaces. The endocranial skull base is divided into nine regions: a central skull base adjoining the left and right sides, each divided into the anterior, middle, and posterior skull base. The exocranial skull base surface and cranial vault are divided into regions defined by the names of the bones involved: the frontal, parietal, temporal, sphenoid, and occipital bones. The level 3 system assesses fracture morphology, described by the presence of fracture fragmentation, displacement, and bone loss. A documentation of associated intracranial diagnostic features is proposed. This tutorial is organized as a sequence of sections dealing with the description of the classification system, with illustrations of the topographical skull base and cranial vault regions along with rules for fracture location and coding, a series of case examples with clinical imaging, and a general discussion on the design of this classification. PMID:25489394

  4. Using Web-Based Key Character and Classification Instruction for Teaching Undergraduate Students Insect Identification

    ERIC Educational Resources Information Center

    Golick, Douglas A.; Heng-Moss, Tiffany M.; Steckelberg, Allen L.; Brooks, David. W.; Higley, Leon G.; Fowler, David

    2013-01-01

    The purpose of the study was to determine whether undergraduate students receiving web-based instruction based on traditional, key character, or classification instruction differed in their performance of insect identification tasks. All groups showed a significant improvement in insect identifications on pre- and post-two-dimensional picture…

  5. New classification scheme for ozone monitoring stations based on frequency distribution of hourly data.

    PubMed

    Tapia, O; Escudero, M; Lozano, Á; Anzano, J; Mantilla, E

    2016-02-15

    According to European Union (EU) legislation, ozone (O3) monitoring sites can be classified by their location (rural background, rural, suburban, urban) or by the presence of emission sources (background, traffic, industrial). There have been attempts to improve these classifications with the aim of reducing their ambiguity and subjectivity, but although scientifically sound, they lack the simplicity needed for operational purposes. We present a simple methodology for classifying O3 stations based on the characteristics of their frequency distribution curves, which are indicative of the actual impact of combustion sources emitting NO that consumes O3 via titration. Four classes are identified using 1998-2012 hourly data from 72 stations widely distributed across mainland Spain and the Balearic Islands. Types 1 and 2 present unimodal bell-shaped distributions with very little data near zero, reflecting a limited influence of combustion sources, while Type 4 has a primary mode close to zero, showing the impact of combustion sources, and a minor mode at higher concentrations. Type 3 stations present bimodal distributions with the main mode at the higher levels. We propose a quantitative metric based on the Gini index with the objective of reproducing this classification and finding empirical ranges potentially useful for future classifications. The analysis of the correspondence with the EUROAIRNET classes for the 72 stations reveals that the proposed scheme depends only on the impact of combustion sources and not on climatic or orographic aspects. The classification is robust: in 87% of cases the classification obtained for individual years coincides with the global classification obtained for the 1998-2012 period. Finally, case studies showing the applicability of the new classification scheme for assessing the impact on O3 of a station relocation and for critically evaluating an air quality monitoring network are presented.
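
    The Gini index underlying the proposed metric can be computed directly from an hourly series. A sketch with synthetic "rural-like" and "traffic-like" stations follows; the class cut-points are placeholders, since the paper derives its empirical ranges from the 72-station dataset.

    ```python
    import numpy as np

    def gini(x):
        """Gini coefficient of a sample of non-negative values."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        cum = np.cumsum(x)
        return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

    rng = np.random.default_rng(5)
    # One year of hourly O3 (8760 values) for two synthetic station types.
    rural = rng.normal(70, 15, 8760).clip(min=0)      # unimodal, away from zero
    traffic = np.abs(rng.normal(0, 12, 8760))         # mode near zero (titration)

    for name, series in [("rural-like", rural), ("traffic-like", traffic)]:
        g = gini(series)
        station_type = 1 if g < 0.2 else 4 if g > 0.4 else 3   # placeholder ranges
        print(f"{name}: Gini={g:.2f} -> Type {station_type}")
    ```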

  6. Mercury Control with Calcium-Based Sorbents and Oxidizing Agents

    SciTech Connect

    Thomas K. Gale

    2005-07-01

    This Final Report contains the test descriptions, results, analysis, correlations, theoretical descriptions, and model derivations produced from many different investigations performed on a project funded by the U.S. Department of Energy to investigate calcium-based sorbents and injection of oxidizing agents for the removal of mercury. Among the technologies investigated were (a) calcium-based sorbents in general, (b) oxidant-additive sorbents developed originally at the EPA, and (c) optimized calcium/carbon synergism for mercury-removal enhancement. In addition, (d) sodium-tetrasulfide injection was found to effectively capture both forms of mercury across baghouses and ESPs, and has since been demonstrated at a slipstream treating PRB coal. It has been shown that sodium tetrasulfide had little impact on the foam index of PRB flyash, which may indicate that sodium-tetrasulfide injection could be used at power plants without affecting flyash sales. Another technology, (e) coal blending, was shown to be an effective means of increasing mercury removal by optimizing the concentration of calcium and carbon in the flyash. In addition to the investigation and validation of multiple mercury-control technologies (a through e above), important fundamental mechanisms governing mercury kinetics in flue gas were elucidated. For example, it was shown, for the range of chlorine and unburned-carbon (UBC) concentrations in coal-fired utilities, that chlorine has much less effect on mercury oxidation and removal than UBC in the flyash. Unburned carbon enhances mercury oxidation in the flue gas by reacting with HCl to form chlorinated-carbon sites, which then react with elemental mercury to form mercuric chloride, which subsequently desorbs back into the flue gas. Calcium was found to enhance mercury removal by stabilizing the oxidized mercury formed on carbon surfaces. Finally, a model was developed to describe these mercury adsorption, desorption, oxidation, and removal mechanisms.

  7. Agent Based Modeling of Human Gut Microbiome Interactions and Perturbations

    PubMed Central

    Shashkova, Tatiana; Popenko, Anna; Tyakht, Alexander; Peskov, Kirill; Kosinsky, Yuri; Bogolubsky, Lev; Raigorodskii, Andrei; Ischenko, Dmitry; Alexeev, Dmitry; Govorun, Vadim

    2016-01-01

    Background: Intestinal microbiota plays an important role in human health. It is involved in digestion and protects the host against external pathogens. Examination of intestinal microbiome interactions is required for understanding the community's influence on host health. Studies of the microbiome can provide insight into methods of improving health, including specific clinical procedures for modifying an individual's microbial community composition and for correcting the microbiota by colonizing with new bacterial species or through dietary changes. Methodology/Principal Findings: In this work we report an agent-based model of interactions between two bacterial species and between the species and the gut. The model is based on reactions describing bacterial fermentation of polysaccharides to acetate and propionate and fermentation of acetate to butyrate. Antibiotic treatment was chosen as the disturbance factor and used to investigate the stability of the system. System recovery after antibiotic treatment was analyzed as a function of the number of feedback interactions inside the community, the therapy duration, and the amount of antibiotics. Bacterial species are known to mutate and acquire resistance to antibiotics. The ability to mutate was treated as a stochastic process, and under this assumption the ratio of sensitive to resistant bacteria was calculated during antibiotic therapy and recovery. Conclusion/Significance: The model confirms the hypothesis that feedback mechanisms are necessary for the functionality and stability of the system after a disturbance. A high fraction of the bacterial community was shown to mutate during antibiotic treatment, though sensitive strains could become dominant after recovery. The recovery of sensitive strains is explained by the fitness cost of resistance. The model demonstrates not only the quantitative dynamics of bacterial species, but also the ability to observe the emergent spatial structure and its alteration, depending on various feedback mechanisms.

  8. OBIA based hierarchical image classification for industrial lake water.

    PubMed

    Uca Avci, Z D; Karaman, M; Ozelkan, E; Kumral, M; Budakoglu, M

    2014-07-15

    Water management is very important in water-mining regions for the sustainability of the natural environment and for industrial activities. This study focused on Acigol Lake, which is an important wetland for sodium sulphate (Na2SO4) production, a significant natural protection area and habitat for local bird species and endemic species of this saline environment, and a stopover for migrating flamingos. Using a hierarchical classification method, the ponds representing the industrial part were classified according to in-situ measured Baumé values, and the lake water representing the natural part was classified according to in-situ measurements of water depth. The latter is directly related to the water level, which should not exceed a critical level determined by the regulatory authorities. The resulting data, produced at an accuracy of around 80%, illustrate the status of the two main regions for a single date. The output of the analysis may be meaningful for firms and environmental researchers, and can give the authorities a good perspective for decision making for sustainable resource management in a region with uncommon and specific ecological characteristics. PMID:24813772

  9. Orbital Roof Fractures: A Clinically Based Classification and Treatment Algorithm.

    PubMed

    Connon, Felicity Victoria; Austin, S J B; Nastri, A L

    2015-09-01

    Orbital roof fractures are relatively uncommon in craniofacial surgery but present a management challenge due to their anatomy and potential associated injuries. Currently, neither a classification system nor treatment algorithm exists for orbital roof fractures, which this article aims to provide. This article provides a literature review and clinical experience of a tertiary trauma center in Australia. All cases admitted to the Royal Melbourne Hospital with orbital roof fractures between January 2011 and July 2013 were reviewed regarding patient characteristics, mechanism, imaging (computed tomography), and management. Forty-seven patients with orbital roof fractures were treated. Three of these were isolated cases. Forty were male and seven were female. Assault (14) and falls (13) were the most common causes of injury. Forty-two patients were treated conservatively and five had orbital roof repairs. On the basis of the literature and local experience, we propose a four-point system, with subcategories allowing for different fracture characteristics to impact management. Despite the infrequency of orbital roof fractures, their potential ophthalmological, neurological, and functional sequelae can carry a significant morbidity. As such, an algorithm for management of orbital roof fractures may help to ensure appropriate and successful management of these patients. PMID:26269727

  10. Semi-automatic classification of glaciovolcanic landforms: An object-based mapping approach based on geomorphometry

    NASA Astrophysics Data System (ADS)

    Pedersen, G. B. M.

    2016-02-01

    A new object-oriented approach is developed to classify glaciovolcanic landforms (Procedure A) and their landform elements boundaries (Procedure B). It utilizes the principle that glaciovolcanic edifices are geomorphometrically distinct from lava shields and plains (Pedersen and Grosse, 2014), and the approach is tested on data from Reykjanes Peninsula, Iceland. The outlined procedures utilize slope and profile curvature attribute maps (20 m/pixel) and the classified results are evaluated quantitatively through error matrix maps (Procedure A) and visual inspection (Procedure B). In procedure A, the highest obtained accuracy is 94.1%, but even simple mapping procedures provide good results (> 90% accuracy). Successful classification of glaciovolcanic landform element boundaries (Procedure B) is also achieved and this technique has the potential to delineate the transition from intraglacial to subaerial volcanic activity in orthographic view. This object-oriented approach based on geomorphometry overcomes issues with vegetation cover, which has been typically problematic for classification schemes utilizing spectral data. Furthermore, it handles complex edifice outlines well and is easily incorporated into a GIS environment, where results can be edited or fused with other mapping results. The approach outlined here is designed to map glaciovolcanic edifices within the Icelandic neovolcanic zone but may also be applied to similar subaerial or submarine volcanic settings, where steep volcanic edifices are surrounded by flat plains.

  11. Demeter, Persephone, and the search for emergence in agent-based models.

    SciTech Connect

    North, M. J.; Howe, T. R.; Collier, N. T.; Vos, J. R.; Decision and Information Sciences; Univ. of Chicago; PantaRei Corp.; Univ. of Illinois

    2006-01-01

    In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.

  12. Intermittent observer-based consensus control for multi-agent systems with switching topologies

    NASA Astrophysics Data System (ADS)

    Xu, Xiaole; Gao, Lixin

    2016-06-01

    In this paper, we focus on the consensus problem for leaderless and leader-follower multi-agent systems with periodically intermittent control. The dynamics of each agent in the system are linear, and the interconnection topology among the agents is assumed to be switching. We assume that each agent can share only its outputs with its neighbours. Therefore, a class of distributed intermittent observer-based consensus protocols is proposed for each agent. First, to solve this problem, a parameter-dependent common Lyapunov function is constructed. Using this function, we prove that all agents reach consensus on a prescribed value under the designed intermittent controller and observer, provided suitable communication conditions hold. Second, based on the investigation of the leader-following consensus problem, we design a new distributed intermittent observer-based protocol for each following agent. Finally, we provide an illustrative example to verify the effectiveness of the proposed approach.
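
    In its simplest form, the periodically intermittent idea amounts to running a consensus update only during the on-window of each period. The toy below uses single-integrator agents on a fixed ring, a deliberate simplification of the paper's setting of general linear dynamics, observers, and switching topologies; the gains and duty cycle are illustrative.

    ```python
    import numpy as np

    # Ring topology over 5 agents.
    A = np.array([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

    x = np.array([5.0, -3.0, 1.0, 8.0, -6.0])   # initial states
    dt, eps = 0.01, 0.5
    period, on_fraction = 1.0, 0.6              # control active 60% of each period

    for k in range(3000):
        t = k * dt
        if (t % period) < on_fraction * period:  # intermittent control window
            x = x - dt * eps * (L @ x)
        # otherwise these (static) agents simply hold their states

    print("states after 30 s:", np.round(x, 3))  # all near the initial average
    ```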

  13. Confidence and the stock market: an agent-based approach.

    PubMed

    Bertella, Mario A; Pires, Felipe R; Feng, Ling; Stanley, Harry Eugene

    2014-01-01

    Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations, indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior. PMID:24421888
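
    A stripped-down fundamentalist/chartist market illustrates how heterogeneous memory lengths enter price formation. This toy omits the paper's adaptive confidence mechanism, and the demand functions and coefficients are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    T, n_fund, n_chart = 2000, 50, 50
    fundamental = 100.0
    memory = rng.integers(5, 60, size=n_chart)    # heterogeneous memory lengths
    prices = [100.0]

    for t in range(T):
        p = prices[-1]
        # Fundamentalists: buy below the fundamental value, sell above it.
        d_fund = n_fund * 0.02 * (fundamental - p)
        # Chartists: each extrapolates its own trailing trend.
        d_chart = sum(0.5 * (p - prices[max(0, len(prices) - 1 - m)]) / m
                      for m in memory)
        prices.append(p + 0.01 * (d_fund + d_chart) + rng.normal(0, 0.5))

    returns = np.diff(np.log(prices))
    kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
    print(f"return std: {returns.std():.4f}  kurtosis: {kurt:.2f}")  # ~3 is Gaussian
    ```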

  14. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Unlike conventional computer-aided diagnosis (CAD) approaches to pulmonary emphysema classification, in this paper the texton dictionary is first learned by applying sparse representation (SR) to image patches in the training dataset. The SR coefficients of the test images over the dictionary are then used to construct histograms as texture representations. Finally, classification is performed using a nearest neighbour classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The proposed system achieves an accuracy of about 88%, higher than both a state-of-the-art method based on rotation-invariant local binary pattern histograms and a texture classification method based on texton learning by k-means, which is among the best-performing approaches in the literature.

  15. Novel Strength Test Battery to Permit Evidence-Based Paralympic Classification

    PubMed Central

    Beckman, Emma M.; Newcombe, Peter; Vanlandewijck, Yves; Connick, Mark J.; Tweedy, Sean M.

    2014-01-01

    Ordinal-scale strength assessment methods currently used in Paralympic athletics classification prevent the development of evidence-based classification systems. This study evaluated a battery of seven ratio-scale isometric tests with the aim of facilitating the development of evidence-based methods of classification. The study aimed to report sex-specific normal performance ranges, evaluate test-retest reliability, and evaluate the relationship between the measures and body mass. Body mass and strength measures were obtained from 118 participants (63 males and 55 females), ages 23.2 years ± 3.7 (mean ± SD). Seventeen participants completed the battery twice to evaluate test-retest reliability. The body mass-strength relationship was evaluated using Pearson correlations and allometric exponents. Conventional patterns of force production were observed. Reliability was acceptable (mean intraclass correlation = 0.85). Eight measures had moderate significant correlations with body size (r = 0.30-0.61). Allometric exponents were higher in males than in females (mean 0.99 vs 0.30). The results indicate that this comprehensive and parsimonious battery is an important methodological advance because it has psychometric properties critical for the development of evidence-based classification. The measures were interrelated with body size, indicating that further research is required to determine whether raw measures require normalization in order to be validly applied in classification. PMID:25068950

  16. Endotracheal intubation confirmation based on video image classification using a parallel GMMs framework: a preliminary evaluation.

    PubMed

    Lederman, Dror

    2011-01-01

    In this paper, the problem of endotracheal intubation confirmation is addressed. Endotracheal intubation is a complex procedure that requires high skill and the use of secondary confirmation devices to ensure correct positioning of the tube. A novel confirmation approach, based on video image classification, is introduced. The approach is based on identification of specific anatomical landmarks, including the esophagus, the upper trachea, and the main bifurcation of the trachea into the two primary bronchi (the "carina"), as indicators of correct or incorrect tube insertion and positioning. Classification of the images is performed using a parallel Gaussian mixture models (GMMs) framework, composed of several GMMs schematically connected in parallel, where each GMM represents a different imaging angle. The performance of the proposed approach was evaluated using a dataset of cow-intubation videos and a dataset of human-intubation videos. Each video image was manually (visually) classified by a medical expert into one of three categories: upper-tracheal intubation, correct (carina) intubation, and esophageal intubation. The image classification algorithm was applied off-line using a leave-one-case-out method. The results show that the system correctly classified 1517 out of 1600 (94.8%) of the cow-intubation images, and 340 out of the 358 human images (95.0%). The classification results compared favorably with a "standard" GMM approach utilizing texture-based features, as well as with a state-of-the-art classification method, tested on the cow-intubation dataset. PMID:20878236
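
    The decision rule of a parallel-GMM classifier reduces to fitting one mixture per class (per imaging angle, in the paper) and assigning a test frame to the class whose mixture gives the highest likelihood. A sketch with synthetic two-dimensional features standing in for image features; the class means, component counts, and uniform priors are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    # Synthetic feature clouds standing in for frames from each landmark class.
    train = {
        "esophagus":     rng.normal([0, 0], 1.0, size=(200, 2)),
        "upper_trachea": rng.normal([4, 0], 1.2, size=(200, 2)),
        "carina":        rng.normal([2, 4], 0.8, size=(200, 2)),
    }

    # One GMM per class; at test time pick the class with the highest
    # log-likelihood (uniform class priors assumed).
    models = {c: GaussianMixture(n_components=3, random_state=0).fit(Xc)
              for c, Xc in train.items()}

    def classify(x):
        scores = {c: m.score_samples(x[None, :])[0] for c, m in models.items()}
        return max(scores, key=scores.get)

    test_point = np.array([2.1, 3.7])
    print("predicted position:", classify(test_point))   # expected: carina
    ```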

  17. Object-based classification of residential land use within Accra, Ghana based on QuickBird satellite data

    PubMed Central

    STOW, D.; LOPEZ, A.; LIPPITT, C.; HINTON, S.; WEEKS, J.

    2009-01-01

    A segmentation and hierarchical classification approach applied to QuickBird multispectral satellite data was implemented, with the goal of delineating residential land use polygons and identifying low and high socio-economic status of neighbourhoods within Accra, Ghana. Two types of object-based classification strategies were tested, one based on spatial frequency characteristics of multispectral data, and the other based on proportions of Vegetation–Impervious–Soil sub-objects. Both approaches yielded residential land-use maps with similar overall percentage accuracy (75%) and kappa index of agreement (0.62) values, based on test objects from visual interpretation of QuickBird panchromatic imagery. PMID:19424445

  18. A K-Means Shape Classification Algorithm Using Shock Graph-Based Edit Distance

    NASA Astrophysics Data System (ADS)

    Khanam, Solima; Jang, Seok-Woo; Paik, Woojin

    The skeleton is a very important feature for shape-based image classification. In this paper, we apply discrete shock graph-based skeleton features to classify shapes into predefined groups using a k-means clustering algorithm. The graph edit cost of transforming a database image graph into the respective query graph is used as the distance function for the k-means clustering. To verify the performance of the suggested algorithm, we tested it on the MPEG-7 dataset, where it shows excellent shape classification performance.
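
    Because an edit distance offers no way to average two graphs, k-means over edit costs is usually realized as k-medoids, where each cluster centre must be an actual item and only a precomputed distance matrix is needed. A generic sketch follows, with Euclidean distances between toy points standing in for shock-graph edit costs.

    ```python
    import numpy as np

    def k_medoids(D, k, iters=100, seed=0):
        """Cluster items given only a symmetric pairwise distance matrix D."""
        rng = np.random.default_rng(seed)
        medoids = rng.choice(len(D), size=k, replace=False)
        for _ in range(iters):
            assign = D[:, medoids].argmin(axis=1)          # nearest medoid
            new = medoids.copy()
            for j in range(k):
                members = np.flatnonzero(assign == j)
                if len(members):                           # most central member
                    new[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
            if np.array_equal(new, medoids):
                break
            medoids = new
        return assign, medoids

    # Toy stand-in: Euclidean distances between 2-D points play the role of
    # shock-graph edit costs between shapes.
    rng = np.random.default_rng(7)
    pts = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(3, 0.5, (10, 2))])
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    assign, medoids = k_medoids(D, k=2)
    print("assignments:", assign)
    print("medoid items:", medoids)
    ```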

  19. Early detection of Alzheimer's disease using histograms in a dissimilarity-based classification framework

    NASA Astrophysics Data System (ADS)

    Luchtenberg, Anne; Simões, Rita; van Cappellen van Walsum, Anne-Marie; Slump, Cornelis H.

    2014-03-01

    Classification methods have been proposed to detect early-stage Alzheimer's disease using Magnetic Resonance images. In particular, dissimilarity-based classification has been applied using a deformation-based distance measure. However, such an approach is not only computationally expensive, it also captures only large-scale alterations in the brain. In this work, we propose the use of image histogram distance measures, determined both globally and locally, to detect very mild to mild Alzheimer's disease. Using an ensemble of local patches over the entire brain, we obtain an accuracy of 84% (sensitivity 80% and specificity 88%).

  1. Semantic classification of diseases in discharge summaries using a context-aware rule-based classifier.

    PubMed

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T

    2009-01-01

    OBJECTIVE Automated and disease-specific classification of textual clinical discharge summaries is of great importance in the life sciences, as it helps physicians conduct medical studies by providing statistically relevant data for analysis. This can be further facilitated if, when labeling discharge summaries, semantic labels are also extracted from the text, such as whether a given disease is present, absent, or questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves this semantic classification task. DESIGN The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in subsequent steps. First, some misleading parts are removed from the text; then the text is partitioned into positive, negative, and uncertain context segments, and a sequence of binary classifiers is applied to assign the appropriate semantic labels. MEASUREMENTS For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures, F1-macro and F1-micro. RESULTS On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, achieving F1-macro = 0.80 for the textual task and F1-macro = 0.67 for the intuitive task, and obtained second place on the textual and first place on the intuitive subtask of the challenge. CONCLUSIONS The authors show that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques when the training data are limited and some semantic labels are very sparse. PMID:19390101
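
    The pipeline above (context segmentation followed by per-disease label assignment) can be caricatured with a few rules. The trigger lists and scope handling below are tiny illustrative stand-ins, not the authors' rule set:

    ```python
    import re

    NEGATION = r"\b(denies|no evidence of|negative for|without)\b"
    UNCERTAIN = r"\b(possible|questionable|cannot rule out|suspected)\b"

    def classify_disease(text, disease):
        """Assign one of the four semantic labels for a single disease."""
        for sentence in re.split(r"[.;]\s*", text.lower()):
            if disease not in sentence:
                continue
            if re.search(NEGATION, sentence):
                return "absent"
            if re.search(UNCERTAIN, sentence):
                return "questionable"
            return "present"
        return "unmentioned"

    note = ("Patient denies chest pain. Possible sleep apnea. "
            "History of obesity; currently on metformin for diabetes.")
    for disease in ["sleep apnea", "obesity", "diabetes", "asthma"]:
        print(f"{disease}: {classify_disease(note, disease)}")
    ```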

  2. Optimizing Object-Based Classification in Urban Environments Using Very High Resolution GEOEYE-1 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Vicente, R.; Aguilar, F. J.; Fernández, A.; Saldaña, M. M.

    2012-07-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In fact, one of the most common applications of remote sensing images is the extraction of land cover information for digital image base maps by means of classification techniques. When VHR satellite images are used, an object-based classification strategy can potentially improve classification accuracy compared with pixel-based classification. The aim of this work is to carry out an accuracy assessment of classification in urban environments using pansharpened and panchromatic GeoEye-1 orthoimages. We evaluate the influence on object-based supervised classification accuracy of the sets of image object (IO) features used to classify the selected land cover classes. For the classification phase, the nearest neighbour classifier and the eCognition v. 8 software were used, with seven sets of IO features covering texture, geometry, and principal layer values. The IOs were obtained in eCognition using a multiresolution segmentation approach, a bottom-up region-merging technique starting from one-pixel objects. Four different sets or repetitions of training samples, each representing 10% of every class, were extracted from the IOs, while the remaining objects were used for accuracy validation. A statistical test was carried out to strengthen the conclusions. An overall accuracy of 79.4% was attained with the panchromatic, red, blue, green and near infrared (NIR) bands from the panchromatic and pansharpened orthoimages, the brightness computed for the red, blue, green and infrared bands, the Maximum Difference, the mean soil-adjusted vegetation index (SAVI), and, finally, the normalized Digital Surface Model (nDSM) computed from LiDAR data. For building classification, nDSM was the most important feature.

  3. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Nutaro, James J

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, leveraging data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
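
    The two paradigms under comparison can be juxtaposed in miniature: a deterministic SIR difference equation beside a stochastic per-agent version of the same process. The parameters below are generic illustrations, not values calibrated to the 1918 data:

    ```python
    import numpy as np

    N, I0, beta, gamma, days = 1000, 10, 0.3, 0.1, 160

    # Equation-based: deterministic discrete-time SIR.
    S, I, R = float(N - I0), float(I0), 0.0
    eq_curve = []
    for _ in range(days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        eq_curve.append(I)

    # Agent-based: every agent is infected and recovers stochastically.
    rng = np.random.default_rng(8)
    state = np.zeros(N, dtype=int)              # 0 = S, 1 = I, 2 = R
    state[:I0] = 1
    ab_curve = []
    for _ in range(days):
        p_inf = beta * (state == 1).sum() / N   # per-susceptible daily risk
        infect = (state == 0) & (rng.random(N) < p_inf)
        recover = (state == 1) & (rng.random(N) < gamma)
        state[infect], state[recover] = 1, 2
        ab_curve.append((state == 1).sum())

    print("peak infected, equation-based:", round(max(eq_curve)))
    print("peak infected, agent-based   :", max(ab_curve))
    ```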

  4. Random Forest Classification of Depression Status Based On Subcortical Brain Morphometry Following Electroconvulsive Therapy

    PubMed Central

    Wade, Benjamin S.C.; Joshi, Shantanu H.; Pirnia, Tara; Leaver, Amber M.; Woods, Roger P.; Thompson, Paul M.; Espinoza, Randall; Narr, Katherine L.

    2015-01-01

    Disorders of the central nervous system are often accompanied by brain abnormalities detectable with MRI. Advances in biomedical imaging and pattern detection algorithms have led to classification methods that may help diagnose and track the progression of a brain disorder and/or predict successful response to treatment. These classification systems often use high-dimensional signals or images, and must handle the computational challenges of high dimensionality as well as complex data types such as shape descriptors. Here, we used shape information from subcortical structures to test a recently developed feature-selection method based on regularized random forests to 1) classify depressed subjects versus controls, and 2) patients before and after treatment with electroconvulsive therapy. We subsequently compared the classification performance of high-dimensional shape features with traditional volumetric measures. Shape-based models outperformed simple volumetric predictors in several cases, highlighting their utility as potential automated alternatives for establishing diagnosis and predicting treatment response. PMID:26413200

  5. Protein Classification Based on Analysis of Local Sequence-Structure Correspondence

    SciTech Connect

    Zemla, A T

    2006-02-13

    The goal of this project was to develop an algorithm to detect and calculate common structural motifs in compared structures, and define a set of numerical criteria to be used for fully automated motif based protein structure classification. The Protein Data Bank (PDB) contains more than 33,000 experimentally solved protein structures, and the Structural Classification of Proteins (SCOP) database, a manual classification of these structures, cannot keep pace with the rapid growth of the PDB. In our approach called STRALCP (STRucture Alignment based Clustering of Proteins), we generate detailed information about global and local similarities between given set of structures, identify similar fragments that are conserved within analyzed proteins, and use these conserved regions (detected structural motifs) to classify proteins.

  6. Human motion classification based on a textile integrated and wearable sensor array.

    PubMed

    Teichmann, D; Kuhn, A; Leonhardt, S; Walter, M

    2013-09-01

    A system for the classification of motion patterns is presented, based on a non-contact magnetic induction monitoring device. This device is textile-integrated, wearable, and able to measure pulse and respiratory activity. The proposed classifiers are a neural network, a support vector machine, and a decision tree algorithm generated by bootstrap aggregating. Their performance is compared using a data set comprising five different types of motion patterns. In addition, the dependence of the misclassification error on the input sample length is investigated. The features used for classification were based on information derived by the discrete wavelet transform and on lower- and higher-order statistical measures. With the presented magnetic induction device, all tested classifiers were able to classify the defined motion patterns with an accuracy of over 93%. The proposed bootstrap-aggregating decision tree algorithm produced the best classification performance (accuracy of 96%). The support vector machine classifier showed the least dependence on the sample length. PMID:23945071
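
    The feature pipeline described (discrete-wavelet-transform sub-band statistics plus lower- and higher-order moments, fed to a bootstrap-aggregated tree ensemble) can be sketched as follows. Synthetic sinusoidal segments stand in for the magnetic induction recordings, and every setting, including the sampling rate and wavelet choice, is an illustrative assumption.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import kurtosis, skew
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(10)

    def make_segment(motion):
        """Toy stand-ins for two motion patterns sampled at 64 Hz for 4 s."""
        t = np.arange(256) / 64
        freq, amp = (1.0, 1.0) if motion == 0 else (3.0, 0.5)
        return amp * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, t.size)

    def features(x):
        """Per-sub-band magnitude/std from a 3-level DWT plus global moments."""
        bands = pywt.wavedec(x, "db4", level=3)
        f = [v for b in bands for v in (np.abs(b).mean(), b.std())]
        return f + [x.mean(), x.std(), skew(x), kurtosis(x)]

    X = np.array([features(make_segment(m)) for m in range(2) for _ in range(100)])
    y = np.repeat([0, 1], 100)

    # Bootstrap-aggregated decision trees, as in the paper's best classifier.
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
    print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```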

  7. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications, including land cover mapping. In this study we assessed the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs; test site 2 represented a densely built-up area; and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2, and the clouds contained additional information about intensity and encoded RGB values. The orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated, (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps and the intensity and nDSM rasters. Fifteen homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid a salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which implements both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage.

  8. [Automatic classification method of star spectra data based on manifold fuzzy twin support vector machine].

    PubMed

    Liu, Zhong-bao; Gao, Yan-yun; Wang, Jian-zhen

    2015-01-01

    The support vector machine (SVM), with its good learning ability and generalization, is widely used in star spectra data classification. But as the scale of the data grows, the shortcomings of SVM appear: the amount of computation is quite large and the classification speed is too slow. To solve these problems, the twin support vector machine (TWSVM) was proposed by Jayadeva; its advantage is that the time cost is reduced to 1/4 of that of SVM. However, all of the methods mentioned above focus only on global characteristics and neglect local ones. In view of this, an automatic classification method for star spectra data based on a manifold fuzzy twin support vector machine (MF-TSVM) is proposed in this paper. In MF-TSVM, manifold-based discriminant analysis (MDA) is used to obtain the global and local characteristics of the input data, and fuzzy membership is introduced to reduce the influence of noise and singular data on the classification results. Comparative experiments with current classification methods, such as C-SVM and KNN, on SDSS star spectra datasets verify the effectiveness of the proposed method. PMID:25993861
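
    The MF-TSVM itself is not available in standard libraries. As a hedged illustration of the fuzzy-membership idea alone, the sketch below derives a distance-based membership per sample and passes it as a weight to an ordinary SVM, so that noisy or singular spectra influence the decision boundary less (all data are synthetic).

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 10))   # stand-ins for spectral features
      y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(int)

      # Fuzzy membership: decays with distance from the sample's class centre,
      # so outliers and singular samples receive less weight.
      membership = np.empty(len(y))
      for c in np.unique(y):
          d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
          membership[y == c] = 1.0 - d / (d.max() + 1e-9)

      clf = SVC().fit(X, y, sample_weight=membership)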

  9. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

    Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated data expected from a study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, and 80 in aging. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from the 1129-protein panel. PMID:27524871
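
    A minimal re-creation of the simulation idea, with synthetic data matching the abstract's description (two subtypes differing in 40 of 1129 proteins at effect size 1.5), clusters with k-means and reads the misclassification error off the known truth:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_patients, n_proteins, n_diff, effect = 100, 1129, 40, 1.5
      truth = rng.integers(0, 2, n_patients)        # two latent subtypes
      X = rng.standard_normal((n_patients, n_proteins))
      X[truth == 1, :n_diff] += effect              # differentially abundant block

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      # Cluster labels are arbitrary, so take the better of the two alignments.
      error = min(np.mean(labels != truth), np.mean(labels == truth))
      print(f"misclassification error: {error:.2%}")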

  10. A novel multi-manifold classification model via path-based clustering for image retrieval

    NASA Astrophysics Data System (ADS)

    Zhu, Rong; Yuan, Zhijun; Xuan, Junying

    2011-12-01

    Nowadays, with digital cameras and mass storage devices becoming increasingly affordable, thousands of pictures are taken each day and images appear on the Internet at an astonishing rate. Image retrieval is the process of searching huge image collections for the information a user needs. However, it is hard to obtain satisfactory results due to the well-known "semantic gap". Image classification plays an essential role in the retrieval process, but traditional methods encounter problems when dealing with high-dimensional, large-scale image sets. Here, we propose a novel multi-manifold classification model for image retrieval. Firstly, we reduce the classification of images from the high-dimensional space to one on low-dimensional manifolds, largely reducing the complexity of the classification process. Secondly, considering that traditional distance measures often fail to capture the visual semantics of manifolds, especially for images with complex data distributions, we define two new distance measures based on path-based clustering and apply them to the construction of a multi-class image manifold. An experiment was conducted on 2890 Web images. The comparison among three methods shows that the proposed method achieves the highest classification accuracy.
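
    The paper's two distance measures are its own. As a sketch of the general path-based idea, the minimax path distance between two points (the smallest possible largest hop over all connecting paths) can be read off as the maximum edge weight along their unique path in a minimum spanning tree:

      import networkx as nx
      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      X = np.random.rand(30, 2)            # toy data points
      D = squareform(pdist(X))             # pairwise Euclidean distances

      G = nx.Graph()
      for i in range(len(X)):
          for j in range(i + 1, len(X)):
              G.add_edge(i, j, weight=D[i, j])
      T = nx.minimum_spanning_tree(G)

      def path_based_distance(u, v):
          """Largest edge on the (unique) MST path, i.e. the minimax distance."""
          path = nx.shortest_path(T, u, v)
          return max(T[a][b]["weight"] for a, b in zip(path, path[1:]))

      print(path_based_distance(0, 5))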

  11. Classification of the micro and nanoparticles and biological agents by neural network analysis of the parameters of optical resonance of whispering gallery mode in dielectric microspheres

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Schweiger, Gustav; Ostendorf, Andreas

    2011-07-01

    A novel technique for the label-free analysis of micro- and nanoparticles, including biomolecules, using optical microcavity resonance of whispering-gallery-type modes is being developed. Various schemes of the method, using both standard and specially produced microspheres, have been investigated for further development towards microbial applications. It was demonstrated that optical resonance under optimal geometry could be detected at laser powers of less than 1 microwatt. The sensitivity of the developed schemes has been tested by monitoring the spectral shift of the whispering gallery modes. Water solutions of ethanol, ascorbic acid, blood phantoms including albumin and HCl, glucose, biotin, biomarkers such as C-reactive protein, as well as bacteria and virus phantoms (gels of silica micro- and nanoparticles), have been used. The structure of the resonance spectra of the solutions was a specific subject of investigation. A probabilistic neural network classifier for biological agents and micro/nanoparticles has been developed. Several parameters of the resonance spectra, such as spectral shift, broadening, and diffuseness, have been used as inputs to the network classifier. A classification probability of approximately 98% has been achieved for the probes under investigation. The developed approach has been demonstrated to be a promising technology platform for sensitive, lab-on-chip type sensors for different biological molecules, e.g. proteins, oligonucleotides, oligosaccharides, lipids, small molecules, viral particles, and cells, as well as for different experimental contexts, e.g. proteomics, genomics, drug discovery, and membrane studies.
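
    A probabilistic neural network is essentially a Parzen-window classifier. A minimal stand-in (not the authors' network) scores each class by a Gaussian kernel density fitted to its training spectra parameters and picks the most probable class; the feature values below are placeholders for the measured spectral shift, broadening, diffuseness, etc.

      import numpy as np
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(0)
      X = rng.standard_normal((120, 4))    # 4 resonance-spectrum parameters
      y = np.repeat(np.arange(3), 40)      # 3 agent/particle classes

      # One kernel density per class plays the role of the pattern and
      # summation layers of a probabilistic neural network.
      kdes = {c: KernelDensity(bandwidth=0.5).fit(X[y == c]) for c in np.unique(y)}

      def pnn_predict(x):
          scores = {c: kde.score_samples(x[None, :])[0] for c, kde in kdes.items()}
          return max(scores, key=scores.get)

      print(pnn_predict(X[0]))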

  12. Biomedical literature classification using encyclopedic knowledge: a Wikipedia-based bag-of-concepts approach.

    PubMed

    Mouriño García, Marcos Antonio; Pérez Rodríguez, Roberto; Anido Rifón, Luis E

    2015-01-01

    Automatic classification of text documents into a set of categories has many applications, among which the classification of biomedical literature stands out. Biomedical staff and researchers have to deal with a great deal of literature in their daily activities, so a system that provides access to documents of interest in a simple and effective way would be useful; for that, the documents have to be sorted on some criterion, that is to say, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm: features are words in the text, thus suffering from synonymy and polysemy, and their weights are based solely on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus. PMID:26468436
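
    A toy illustration of the bag-of-concepts idea, using an invented two-entry concept map in place of Wikipedia: synonyms collapse to a single concept feature before vectorisation, so documents that use different words for the same meaning end up with overlapping representations.

      from sklearn.feature_extraction.text import CountVectorizer

      # Invented miniature concept map; the paper derives this from Wikipedia.
      concept_map = {"myocardial": "heart", "cardiac": "heart",
                     "tumour": "neoplasm", "tumor": "neoplasm"}

      def to_concepts(doc):
          return " ".join(concept_map.get(w, w) for w in doc.lower().split())

      docs = ["cardiac tumour detected", "myocardial tumor detected"]
      bow = CountVectorizer().fit_transform(docs)       # overlap only on "detected"
      boc = CountVectorizer().fit_transform(map(to_concepts, docs))  # identical rows
      print(bow.toarray())
      print(boc.toarray())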

  14. A Dimension Reduction-Based Method for Classification of Hyperspectral and LIDAR Data

    NASA Astrophysics Data System (ADS)

    Abbasi, B.; Arefi, H.; Bigdeli, B.

    2015-12-01

    The coexistence of natural objects such as grass, trees, and rivers with man-made features such as buildings and roads makes it difficult to classify ground objects; a single data source or a simple classification approach cannot by itself improve classification results in object identification, whereas combining data from different sensors increases the accuracy of the available spatial and spectral information. In this paper, we propose a classification algorithm based on the joint use of hyperspectral and LiDAR (Light Detection and Ranging) data and on dimension reduction. First, feature extraction techniques are applied to obtain more information from the LiDAR and hyperspectral data, and Principal Component Analysis (PCA) and Minimum Noise Fraction (MNF) are used to reduce the dimension of the spectral features; the 30 features containing the most information from the hyperspectral images are retained for both PCA and MNF. In addition, the Normalized Difference Vegetation Index (NDVI) is computed to highlight vegetation. The features extracted from the LiDAR data are calculated from the relation between every pixel and its surrounding pixels in local neighbourhood windows and are based on the Grey Level Co-occurrence Matrix (GLCM). In the second step, classifiers are trained on class samples using the features obtained by MNF, PCA, NDVI and GLCM: two classification maps are produced by an SVM classifier, one from the MNF+NDVI+GLCM features and one from the PCA+NDVI+GLCM features. Finally, the classified images are fused into the final classification map by a decision-fusion majority-voting strategy.
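
    A schematic sketch of the two-branch design with synthetic arrays and simplified features: the hyperspectral cube is reduced by PCA and by a whitened-PCA stand-in for MNF (which scikit-learn does not provide), one SVM is trained per feature set, and the two maps are fused, with disagreements resolved here by the more confident classifier as one simple fusion rule.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_pixels, n_bands = 1000, 144
      cube = rng.standard_normal((n_pixels, n_bands))   # hyperspectral pixels
      lidar_feats = rng.standard_normal((n_pixels, 8))  # GLCM/NDVI stand-ins
      labels = np.repeat(np.arange(5), 200)             # five ground-object classes

      pca_feats = PCA(n_components=30).fit_transform(cube)
      mnf_like = PCA(n_components=30, whiten=True).fit_transform(cube)  # MNF stand-in

      train = rng.choice(n_pixels, 300, replace=False)
      probs, classes = [], None
      for spectral in (pca_feats, mnf_like):
          F = np.hstack([spectral, lidar_feats])
          clf = SVC(probability=True).fit(F[train], labels[train])
          probs.append(clf.predict_proba(F))
          classes = clf.classes_

      # With only two voters, majority voting reduces to agreement, so
      # disagreements are resolved by the more confident classifier.
      votes = [classes[p.argmax(axis=1)] for p in probs]
      confidence = [p.max(axis=1) for p in probs]
      fused = np.where((votes[0] == votes[1]) | (confidence[0] >= confidence[1]),
                       votes[0], votes[1])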

  15. A new classification scheme of plastic wastes based upon recycling labels

    SciTech Connect

    Özkan, Kemal; Ergin, Semih; Işık, Şahin; Işıklı, İdil

    2015-01-15

    Highlights: • PET, HDPE and PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented: edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can generally be defined in terms of combinations of erosion and dilation, and they eliminate the effect of bottle color as well as of the label. Secondly, the pixel-wise intensity values of the plastic bottle images are used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors. Only three types of plastic are considered, because they are more prevalent than the other plastic types. The decision mechanism consists of five different feature extraction methods, including Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple
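
    A condensed sketch of the decision mechanism described above: one SVM per subspace method, combined by majority voting. Laplacian Eigenmaps is omitted because it lacks an out-of-sample transform; the flattened intensity vectors are synthetic placeholders for the preprocessed bottle photographs.

      import numpy as np
      from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import VotingClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.random((90, 32 * 32))        # flattened bottle images (synthetic)
      y = np.repeat(np.arange(3), 30)      # PET / HDPE / PP

      voter = VotingClassifier([
          ("pca", make_pipeline(PCA(n_components=10), SVC())),
          ("kpca", make_pipeline(KernelPCA(n_components=10, kernel="rbf"), SVC())),
          ("flda", make_pipeline(LinearDiscriminantAnalysis(n_components=2), SVC())),
          ("svd", make_pipeline(TruncatedSVD(n_components=10), SVC())),
      ], voting="hard")
      voter.fit(X, y)
      print(voter.predict(X[:5]))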

  16. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high resolution remote sensing images, and ground objects in such images display rich texture, structure, shape and hierarchical semantic characters, with more landscape elements represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within the conditional random field framework; (4) hierarchical classification results are obtained based on the geo-ontology and the conditional random fields. Finally, GeoEye high-resolution imagery is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.
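
    The segmentation step alone can be sketched as mean-shift clustering on (x, y, colour) feature vectors, which yields spectrally homogeneous over-segmentation regions (image objects); the tile below is a random placeholder.

      import numpy as np
      from sklearn.cluster import MeanShift

      h, w = 40, 40
      img = np.random.rand(h, w, 3)        # placeholder RGB tile
      yy, xx = np.mgrid[0:h, 0:w]
      # Scale the spatial coordinates down so colour dominates the clustering.
      feats = np.column_stack([0.05 * xx.ravel(), 0.05 * yy.ravel(),
                               img.reshape(-1, 3)])
      segments = MeanShift(bandwidth=0.3).fit_predict(feats).reshape(h, w)
      print(segments.max() + 1, "image objects")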

  17. A Hybrid Sensitivity Analysis Approach for Agent-based Disease Spread Models

    SciTech Connect

    Pullum, Laura L; Cui, Xiaohui

    2012-01-01

    Agent-based models (ABM) have been widely deployed in different fields for studying the collective behavior of large numbers of interacting agents. Of particular interest lately is the application of agent-based and hybrid models to epidemiology, specifically Agent-based Disease Spread Models (ABDSM). Validation (one aspect of the means to achieve dependability) of ABDSM simulation models is extremely important. It ensures that the right model has been built and lends confidence to the use of that model to inform critical decisions. In this report, we describe our preliminary efforts in ABDSM validation by using hybrid model fusion technology.
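
    For readers unfamiliar with ABDSMs, a toy agent-based SIR model (not the model validated in the report) shows the basic mechanics: each infectious agent contacts a few random others per step and may transmit infection or recover.

      import random

      class Agent:
          def __init__(self):
              self.state = "S"   # susceptible, infectious, or recovered

      def step(pop, beta=0.05, gamma=0.1, contacts=5):
          for agent in pop:
              if agent.state == "I":
                  # Random mixing; self-contact is possible but harmless here.
                  for other in random.sample(pop, contacts):
                      if other.state == "S" and random.random() < beta:
                          other.state = "I"
                  if random.random() < gamma:
                      agent.state = "R"

      random.seed(0)
      pop = [Agent() for _ in range(1000)]
      pop[0].state = "I"
      for _ in range(100):
          step(pop)
      print(sum(a.state == "R" for a in pop), "agents recovered")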

  18. Graphene-based contrast agents for photoacoustic and thermoacoustic tomography

    PubMed Central

    Lalwani, Gaurav; Cai, Xin; Nie, Liming; Wang, Lihong V.; Sitharaman, Balaji

    2013-01-01

    In this work, graphene nanoribbons and nanoplatelets were investigated as contrast agents for photoacoustic and thermoacoustic tomography (PAT and TAT). We show that oxidized single- and multi-walled graphene oxide nanoribbons (O-SWGNRs, O-MWGNRs) exhibit approximately 5–10 fold signal enhancement for PAT in comparison to blood at the wavelength of 755 nm, and approximately 10–28% signal enhancement for TAT in comparison to deionized (DI) water at 3 GHz. Oxidized graphite microparticles (O-GMPs) and exfoliated graphene oxide nanoplatelets (O-GNPs) show no significant signal enhancement for PAT, and approximately 12–29% signal enhancement for TAT. These results indicate that O-GNRs show promise as multi-modal PAT and TAT contrast agents, and that O-GNPs are suitable contrast agents for TAT. PMID:24490141

  19. The Architecture of an Information Fusion System in Greenhouse Wireless Sensor Networks Based on Multi-Agent

    NASA Astrophysics Data System (ADS)

    Zhu, Wenting; Chen, Ming

    In view of the current stagnation of factory-scale breeding in aquaculture, this article designs a standardized, information-driven and intelligent aquaculture system, proposes an information fusion architecture based on multi-agent technology for the greenhouse wireless sensor network (GWSN), and focuses on the structural characteristics of the four-level information fusion based on distributed multi-agent systems and on the method of constructing the internal structure of each agent.

  20. B-tree search reinforcement learning for model based intelligent agent

    NASA Astrophysics Data System (ADS)

    Bhuvaneswari, S.; Vignashwaran, R.

    2013-03-01

    Agents trained by learning techniques provide a powerful alternative to naive approaches. In this study, B-tree search combined with reinforcement learning is used to moderate data search for information retrieval, achieving accuracy with minimal search time. The impact of the variables and tactics applied in training is determined using reinforcement learning. Agents based on these techniques achieve a satisfactory baseline and act as finite agents following the predetermined model against competitors.

  1. An Agent-Based Approach to Care in Independent Living

    NASA Astrophysics Data System (ADS)

    Kaluža, Boštjan; Mirchevska, Violeta; Dovgan, Erik; Luštrek, Mitja; Gams, Matjaž

    This paper presents a multi-agent system for the care of elderly people living at home on their own, with the aim of prolonging their independence. The system is composed of seven groups of agents providing reliable, robust and flexible monitoring: sensing the user in the environment, reconstructing the user's position and posture to create physical awareness of the user in the environment, reacting to critical situations, calling for help in an emergency, and issuing warnings if unusual behavior is detected. The system has been tested in several on-line demonstrations.

  2. A minimum spanning forest based classification method for dedicated breast CT images

    SciTech Connect

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-11-15

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
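
    Two pieces of the pipeline, the pixelwise SVM classification and the DICE overlap used for evaluation, can be sketched briefly; the bilateral filtering, skin masking and minimum-spanning-forest growth are omitted, and the CT values and manual mask below are synthetic.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      ct = rng.normal(0, 1, (64, 64))          # stand-in breast CT slice
      manual = (ct > 0).astype(int)            # stand-in manual segmentation

      train = rng.choice(ct.size, 400, replace=False)
      clf = SVC().fit(ct.ravel()[train, None], manual.ravel()[train])
      auto = clf.predict(ct.reshape(-1, 1)).reshape(ct.shape)

      def dice(a, b):
          """DICE overlap ratio between two binary masks."""
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      print(f"DICE: {dice(auto == 1, manual == 1):.3f}")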

  3. A new theory-based social classification in Japan and its validation using historically collected information.

    PubMed

    Hiyoshi, Ayako; Fukuda, Yoshiharu; Shipley, Martin J; Bartley, Mel; Brunner, Eric J

    2013-06-01

    Studies of health inequalities in Japan have increased since the millennium. However, there remains a lack of an accepted theory-based classification to measure occupation-related social position for Japan. This study attempts to derive such a classification based on the National Statistics Socio-economic Classification in the UK. Using routinely collected data from the nationally representative Comprehensive Survey of the Living Conditions of People on Health and Welfare, the Japanese Socioeconomic Classification was derived using two variables - occupational group and employment status. Validation analyses were conducted using household income, home ownership, self-rated good or poor health, and Kessler 6 psychological distress (n ≈ 36,000). After adjustment for age, marital status, and area (prefecture), one step lower social class was associated with mean 16% (p < 0.001) lower income, and a risk ratio of 0.93 (p < 0.001) for home ownership. The probability of good health showed a trend in men and women (risk ratio 0.94 and 0.93, respectively, for one step lower social class, p < 0.001). The trend for poor health was significant in women (odds ratio 1.12, p < 0.001) but not in men. Kessler 6 psychological distress showed significant trends in men (risk ratio 1.03, p = 0.044) and in women (1.05, p = 0.004). We propose the Japanese Socioeconomic Classification, derived from basic occupational and employment status information, as a meaningful, theory-based and standard classification system suitable for monitoring occupation-related health inequalities in Japan. PMID:23631782

  4. Classification of agents using Syrian hamster embryo (SHE) cell transformation assay (CTA) with ATR-FTIR spectroscopy and multivariate analysis.

    PubMed

    Ahmadzai, Abdullah A; Trevisan, Júlio; Pang, Weiyi; Riding, Matthew J; Strong, Rebecca J; Llabjani, Valon; Pant, Kamala; Carmichael, Paul L; Scott, Andrew D; Martin, Francis L

    2015-09-01

    The Syrian hamster embryo (SHE) cell transformation assay (pH 6.7) has a reported sensitivity of 87% and specificity of 83%, and an overall concordance of 85% with in vivo rodent bioassay data. To date, the SHE assay is the only in vitro assay that exhibits multistage carcinogenicity. The assay uses morphological transformation, the first stage towards neoplasm, as an endpoint to predict the carcinogenic potential of a test agent. However, scoring of morphologically transformed SHE cells is subjective. We treated SHE cells grown on low-E reflective slides with 2,6-diaminotoluene, N-nitroso-N-ethylnitroguanidine, N-nitroso-N-methylurea, N-nitroso-N-ethylurea, EDTA, dimethyl sulphoxide (DMSO; vehicle control), methyl methanesulfonate, benzo[e]pyrene, mitomycin C, ethyl methanesulfonate, ampicillin or five different concentrations of benzo[a]pyrene. Macroscopically visible SHE colonies were located on the slides and interrogated using attenuated total reflection Fourier-transform infrared (ATR-FTIR) spectroscopy acquiring five spectra per colony. The acquired IR data were analysed using Fisher's linear discriminant analysis (LDA) followed by principal component analysis (PCA)-LDA cluster vectors to extract major and minor discriminating wavenumbers for each treatment class. Each test agent vs. DMSO and treatment-induced transformed cells vs. corresponding non-transformed were classified by a unique combination of major and minor discriminating wavenumbers. Alterations associated with Amide I, Amide II, lipids and nucleic acids appear to be important in segregation of classes. Our findings suggest that a biophysical approach of ATR-FTIR spectroscopy with multivariate analysis could facilitate a more objective interrogation of SHE cells towards scoring for transformation and ultimately employing the assay for risk assessment of test agents. PMID:25925069
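
    A hedged sketch of the PCA-LDA step on synthetic spectra: PCA compresses the spectra, LDA is fitted on the scores, and the LDA axis is projected back through the PCA loadings into wavenumber space, where its largest-magnitude entries indicate candidate discriminating wavenumbers. The spectral region, spectra and class labels are assumptions for illustration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      wavenumbers = np.linspace(900, 1800, 235)   # an assumed spectral region
      X = rng.standard_normal((60, 235))          # 60 spectra (synthetic)
      y = np.repeat([0, 1], 30)                   # treated vs. vehicle control

      pca = PCA(n_components=10).fit(X)
      lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)

      # Back-project the LDA axis through the PCA loadings into spectral space.
      axis = lda.scalings_[:, 0] @ pca.components_
      top = wavenumbers[np.argsort(np.abs(axis))[::-1][:5]]
      print("candidate discriminating wavenumbers:", np.round(top, 1))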

  5. Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint.

    PubMed

    Thilak Krishna, Thilakam Vimal; Creusere, Charles D; Voelz, David G

    2011-01-01

    Polarization, a property of light that conveys information about the transverse electric field orientation, complements other attributes of electromagnetic radiation such as intensity and frequency. Using multiple passive polarimetric images, we develop an iterative, model-based approach to estimate the complex index of refraction and apply it to target classification. PMID:20542767

  6. Multiple Sclerosis and Employment: A Research Review Based on the International Classification of Function

    ERIC Educational Resources Information Center

    Frain, Michael P.; Bishop, Malachy; Rumrill, Phillip D., Jr.; Chan, Fong; Tansey, Timothy N.; Strauser, David; Chiu, Chung-Yi

    2015-01-01

    Multiple sclerosis (MS) is an unpredictable, sometimes progressive chronic illness affecting people in the prime of their working lives. This article reviews the effects of MS on employment based on the World Health Organization's International Classification of Functioning, Disability and Health model. Correlations between employment and…

  7. 7 CFR 27.36 - Classification and Micronaire determinations based on official standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification and Micronaire determinations based on official standards. 27.36 Section 27.36 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD...

  8. 7 CFR 27.36 - Classification determinations based on official standards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification determinations based on official standards. 27.36 Section 27.36 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS...

  9. New echocardiography-based classification of mitral valve pathology: relevance to surgical valve repair.

    PubMed

    Shah, Pravin M; Raney, Aidan A

    2012-01-01

    A new echocardiography-based classification of mitral valve pathology is proposed, the adoption of which may provide a uniform approach to the assessment of individual cases by the cardiologist, cardiac anesthesiologist, and surgeon. This type of approach may facilitate the planning and execution of valve repair techniques, with higher rates of success than are currently reported. PMID:22474740

  10. 8 CFR 204.306 - Classification as an immediate relative based on a Convention adoption.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....306 Classification as an immediate relative based on a Convention adoption. (a) Unless 8 CFR 204.309... process: (1) First, the U.S. citizen must file a Form I-800A under 8 CFR 204.310; (2) Then, once USCIS has... adoptee, the U.S. citizen must file a Form I-800 under 8 CFR 204.313....

  11. A CLASSIFICATION OF U.S. ESTUARIES BASED ON PHYSICAL, HYDROLOGIC ATTRIBUTES

    EPA Science Inventory

    A classification of U.S. estuaries is presented based on estuarine characteristics that have been identified as important for quantifying stressor-response relationships in coastal systems. Estuaries within a class have similar physical/hydrologic and land use characteris...

  12. Multi-class SVM model for fMRI-based classification and grading of liver fibrosis

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Sela, Y.; Edrei, Y.; Pappo, O.; Joskowicz, L.; Abramovitch, R.

    2010-03-01

    We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans using a binary-based one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and yielded a 98.2% separation accuracy between healthy and low grade fibrotic subjects, and an overall accuracy of 84.2% for fibrosis grading. These results are better than the existing image-based methods which can only discriminate between healthy and high grade fibrosis subjects. With appropriate extensions, our method may be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of more invasive approaches, such as biopsy or contrast-enhanced imaging.
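
    The evaluation design, a one-against-all multi-class SVM scored by leave-one-out cross-validation, has a compact analogue (synthetic features stand in for the hemodynamic measures):

      import numpy as np
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.standard_normal((57, 12))   # 57 slice sets, 12 features (stand-ins)
      y = rng.integers(0, 3, 57)          # healthy / low-grade / high-grade

      clf = OneVsRestClassifier(SVC())
      accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
      print(f"leave-one-out accuracy: {accuracy:.1%}")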

  13. A Game-Based Approach to Learning the Idea of Chemical Elements and Their Periodic Classification

    ERIC Educational Resources Information Center

    Franco-Mariscal, Antonio Joaquín; Oliva-Martínez, José María; Blanco-López, Ángel; España-Ramos, Enrique

    2016-01-01

    In this paper, the characteristics and results of a teaching unit based on the use of educational games to learn the idea of chemical elements and their periodic classification in secondary education are analyzed. The method is aimed at Spanish students aged 15-16 and consists of 24 1-h sessions. The results obtained on implementing the teaching…

  14. 8 CFR 204.306 - Classification as an immediate relative based on a Convention adoption.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....306 Classification as an immediate relative based on a Convention adoption. (a) Unless 8 CFR 204.309... process: (1) First, the U.S. citizen must file a Form I-800A under 8 CFR 204.310; (2) Then, once USCIS has... adoptee, the U.S. citizen must file a Form I-800 under 8 CFR 204.313....

  15. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

    The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  16. Scene-Level Geographic Image Classification Based on a Covariance Descriptor Using Supervised Collaborative Kernel Coding.

    PubMed

    Yang, Chunwei; Liu, Huaping; Wang, Shicheng; Liao, Shouyi

    2016-01-01

    Scene-level geographic image classification has been a very challenging problem and has become a research focus in recent years. This paper develops a supervised collaborative kernel coding method based on a covariance descriptor (covd) for scene-level geographic image classification. First, covd is introduced in the feature extraction process and, then, is transformed to a Euclidean feature by a supervised collaborative kernel coding model. Furthermore, we develop an iterative optimization framework to solve this model. Comprehensive evaluations on public high-resolution aerial image dataset and comparisons with state-of-the-art methods show the superiority and effectiveness of our approach. PMID:26999150
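
    The covariance descriptor itself can be sketched briefly: per-pixel feature vectors within a region are summarised by their covariance matrix, and the matrix logarithm maps it off the manifold of symmetric positive-definite matrices so that Euclidean methods, such as a coding model, can operate on it. The image and the choice of per-pixel features below are illustrative assumptions, not the paper's.

      import numpy as np
      from scipy.linalg import logm

      img = np.random.rand(32, 32)
      gy, gx = np.gradient(img)
      yy, xx = np.mgrid[0:32, 0:32]

      # Per-pixel feature vector: x, y, intensity, |dI/dx|, |dI/dy|.
      F = np.stack([xx, yy, img, np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 5)
      covd = np.cov(F, rowvar=False)                   # 5x5 region covariance
      # The matrix log flattens the SPD manifold; the upper triangle is the
      # Euclidean feature vector.
      feat = np.real(logm(covd))[np.triu_indices(5)]
      print(feat.shape)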

  17. Gabor-wavelet decomposition and integrated PCA-FLD method for texture based defect classification

    NASA Astrophysics Data System (ADS)

    Cheng, Xuemei; Chen, Yud-Ren; Yang, Tao; Chen, Xin

    2005-11-01

    In many hyperspectral applications, it is desirable to extract texture features for pattern classification. Texture refers to the replication and symmetry of certain patterns. In a set of hyperspectral images, differences in image texture often imply changes in the physical and chemical properties on or underneath the surface. In this paper, we utilize a Gabor-wavelet-based texture analysis method for textural pattern extraction, combined with an integrated PCA-FLD method for hyperspectral band selection, in the application of distinguishing chilling-damaged cucumbers from normal ones. The classification performances are compared and analyzed.
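
    The Gabor decomposition stage can be sketched with scikit-image's gabor filter: responses over a small bank of frequencies and orientations are pooled into per-image texture features (band selection and the PCA-FLD stage are omitted; the image is a placeholder).

      import numpy as np
      from skimage.filters import gabor

      image = np.random.rand(64, 64)   # placeholder band image

      features = []
      for frequency in (0.1, 0.2, 0.4):
          for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
              real, imag = gabor(image, frequency=frequency, theta=theta)
              magnitude = np.hypot(real, imag)
              features += [magnitude.mean(), magnitude.std()]
      print(len(features), "texture features")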

  18. Morphology classification of galaxies in CL 0939+4713 using a ground-based telescope image

    NASA Technical Reports Server (NTRS)

    Fukugita, M.; Doi, M.; Dressler, A.; Gunn, J. E.

    1995-01-01

    Morphological classification is studied for galaxies in the cluster CL 0939+4713 at z = 0.407 using simple photometric parameters obtained from a ground-based telescope image with seeing of 1-2 arcseconds full width at half maximum (FWHM). By plotting the galaxies in a plane of the concentration parameter versus mean surface brightness, we find a good correlation between the location on the plane and galaxy colors, which are known to correlate with morphological types from a recent Hubble Space Telescope (HST) study. Using the present method, we expect a success rate of classification into early and late types of about 70% or possibly more.
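
    A rough numerical sketch of placing a galaxy on the concentration versus mean-surface-brightness plane from a sky-subtracted cutout; the aperture radii and definitions below are simplified assumptions, not the paper's exact parameters.

      import numpy as np

      img = np.random.rand(51, 51)          # placeholder galaxy cutout
      cy, cx = np.array(img.shape) // 2
      yy, xx = np.indices(img.shape)
      r = np.hypot(yy - cy, xx - cx)

      def flux_within(radius):
          return img[r <= radius].sum()

      # Concentration: flux inside a small aperture over flux in a large one.
      concentration = flux_within(5) / flux_within(20)
      # Mean surface brightness: flux per unit area within the large aperture.
      mean_sb = flux_within(20) / (np.pi * 20 ** 2)
      print(concentration, mean_sb)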

  19. The application of the Kohonen neural network in the nonparametric-quality-based classification of tomatoes

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Tomczak, R.; Kujawa, S.; Piekarska-Boniecka, H.

    2012-04-01

    By using the classification properties of Kohonen-type networks (Tipping 1996), a neural model was built for the quality-based identification of tomatoes. Empirical data in the form of digital images of tomatoes at various stages of storage were used to draw up a topological SOFM (Self-Organizing Feature Map) featuring cluster centers of "comparable" cases (Tadeusiewicz 1997, Boniecki 2008). Radial neurons from the Kohonen topological map were labeled appropriately to allow for the practical quality-based classification of tomatoes (De Grano 2007).
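
    A compact sketch using the third-party MiniSom package (an assumption; the authors' SOFM implementation is their own) maps tomato feature vectors onto a Kohonen grid whose nodes can then be labelled by quality class:

      import numpy as np
      from minisom import MiniSom

      rng = np.random.default_rng(0)
      X = rng.random((150, 6))   # colour/texture features per tomato (synthetic)

      som = MiniSom(6, 6, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(X, 1000)

      # Each sample maps to its best-matching neuron on the 6x6 grid; labelling
      # those nodes by majority quality class gives the practical classifier.
      print(som.winner(X[0]))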
