Sample records for target selection algorithm

  1. A Novel Sensor Selection and Power Allocation Algorithm for Multiple-Target Tracking in an LPI Radar Network

    PubMed Central

    She, Ji; Wang, Fei; Zhou, Jianjiang

    2016-01-01

    Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. This algorithm minimizes the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively from the estimate of the target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex; it can be solved by separating the power allocation problem from the sensor selection problem. Specifically, the power allocation problem can be solved using the bisection method for each sensor selection scheme, and the sensor selection problem can then be solved by a lower-complexity algorithm based on the allocated powers. Simulation results show that the proposed algorithm effectively reduces the total transmitted power of a radar network, which is conducive to improving LPI performance. PMID:28009819
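    The per-sensor power search described above reduces to one-dimensional root finding, since MI grows monotonically with transmit power. Below is a minimal sketch of that bisection step, using an assumed Gaussian-channel MI model; the paper's actual MI expression (between target impulse response and reflected signal) is not reproduced here, and all parameter names are illustrative.

    ```python
    import math

    def mutual_information(power, gain, noise_var):
        """Assumed illustrative MI model: monotonically increasing in power."""
        return 0.5 * math.log2(1.0 + power * gain / noise_var)

    def min_power_bisection(mi_threshold, gain, noise_var, p_max=1e3, tol=1e-6):
        """Smallest transmit power whose predicted MI meets the threshold."""
        if mutual_information(p_max, gain, noise_var) < mi_threshold:
            return None  # threshold unreachable within the power budget
        lo, hi = 0.0, p_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if mutual_information(mid, gain, noise_var) >= mi_threshold:
                hi = mid  # feasible: tighten from above
            else:
                lo = mid  # infeasible: raise the lower bound
        return hi
    ```

    Running this once per sensor, for each candidate sensor subset, mirrors the separation of the two subproblems described in the abstract.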

  2. ProSelection: A Novel Algorithm to Select Proper Protein Structure Subsets for in Silico Target Identification and Drug Discovery Research.

    PubMed

    Wang, Nanyi; Wang, Lirong; Xie, Xiang-Qun

    2017-11-27

    Molecular docking is widely applied to computer-aided drug design and has become relatively mature in recent decades. Application of docking in modeling varies from single lead compound optimization to large-scale virtual screening. The performance of molecular docking is highly dependent on the protein structures selected. It is especially challenging for large-scale target prediction research when multiple structures are available for a single target. Therefore, we have established ProSelection, a docking preferred-protein selection algorithm, in order to generate the proper structure subset(s). By the ProSelection algorithm, protein structures of "weak selectors" are filtered out whereas structures of "strong selectors" are kept. Specifically, a structure with a good statistical performance at distinguishing active ligands from inactive ligands is defined as a strong selector. In this study, 249 protein structures of 14 autophagy-related targets are investigated. Surflex-dock was used as the docking engine to distinguish active and inactive compounds against these protein structures. Both the t test and the Mann-Whitney U test were used to distinguish the strong from the weak selectors, based on the normality of the docking score distribution. The suggested docking score threshold for active ligands (SDA) was generated for each strong selector structure according to the receiver operating characteristic (ROC) curve. The performance of ProSelection was further validated by predicting the potential off-targets of 43 U.S. Food and Drug Administration approved small molecule antineoplastic drugs. Overall, ProSelection will accelerate the computational work in protein structure selection and could be a useful tool for molecular docking, target prediction, and protein-chemical database establishment research.
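    A sketch of the two statistical steps the abstract describes, assuming arrays of docking scores for active and inactive ligands in which higher scores indicate better binding. The Shapiro-Wilk normality check, Welch's t-test, and the Youden's-J rule for picking the ROC threshold are illustrative assumptions; the paper specifies only the t test / Mann-Whitney U choice and an ROC-derived SDA.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.metrics import roc_curve

    def is_strong_selector(active, inactive, alpha=0.05):
        """Strong selector: actives score significantly differently from
        inactives; the test follows the normality of the score distributions."""
        normal = (stats.shapiro(active).pvalue > alpha
                  and stats.shapiro(inactive).pvalue > alpha)
        if normal:
            p = stats.ttest_ind(active, inactive, equal_var=False).pvalue
        else:
            p = stats.mannwhitneyu(active, inactive).pvalue
        return p < alpha

    def sda_threshold(active, inactive):
        """Docking-score threshold for actives (SDA) from the ROC curve, here
        chosen by maximizing Youden's J (tpr - fpr), an assumed criterion."""
        y = np.r_[np.ones(len(active)), np.zeros(len(inactive))]
        scores = np.r_[active, inactive]
        fpr, tpr, thresholds = roc_curve(y, scores)
        return thresholds[np.argmax(tpr - fpr)]
    ```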

  3. Microsatellites as targets of natural selection.

    PubMed

    Haasl, Ryan J; Payseur, Bret A

    2013-02-01

    The ability to survey polymorphism on a genomic scale has enabled genome-wide scans for the targets of natural selection. Theory that connects patterns of genetic variation to evidence of natural selection most often assumes a diallelic locus and no recurrent mutation. Although these assumptions are suitable to selection that targets single nucleotide variants, fundamentally different types of mutation generate abundant polymorphism in genomes. Moreover, recent empirical results suggest that mutationally complex, multiallelic loci including microsatellites and copy number variants are sometimes targeted by natural selection. Given their abundance, the lack of inference methods tailored to the mutational peculiarities of these types of loci represents a notable gap in our ability to interrogate genomes for signatures of natural selection. Previous theoretical investigations of mutation-selection balance at multiallelic loci include assumptions that limit their application to inference from empirical data. Focusing on microsatellites, we assess the dynamics and population-level consequences of selection targeting mutationally complex variants. We develop general models of a multiallelic fitness surface, a realistic model of microsatellite mutation, and an efficient simulation algorithm. Using these tools, we explore mutation-selection-drift equilibrium at microsatellites and investigate the mutational history and selective regime of the microsatellite that causes Friedreich's ataxia. We characterize microsatellite selective events by their duration and cost, note similarities to sweeps from standing point variation, and conclude that it is premature to label microsatellites as ubiquitous agents of efficient adaptive change. Together, our models and simulation algorithm provide a powerful framework for statistical inference, which can be used to test the neutrality of microsatellites and other multiallelic variants.
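    A minimal sketch of the kind of simulation the authors describe: a Wright-Fisher population under a stepwise mutation model with a Gaussian-shaped fitness surface over repeat number. The fitness form and every parameter value are illustrative assumptions, not the paper's calibrated models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_microsatellite(n_gen=2000, pop_size=1000, mu=1e-3, opt=20, s=0.002):
        """Evolve allele repeat counts under mutation-selection-drift."""
        alleles = np.full(pop_size, opt)  # start the population at the fitness optimum
        for _ in range(n_gen):
            w = np.exp(-s * (alleles - opt) ** 2)        # multiallelic fitness surface
            parents = rng.choice(pop_size, size=pop_size, p=w / w.sum())  # selection + drift
            alleles = alleles[parents]
            hits = rng.random(pop_size) < mu             # stepwise mutation model:
            alleles[hits] += rng.choice([-1, 1], size=hits.sum())  # +/- one repeat unit
        return alleles
    ```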

  4. Microsatellites as Targets of Natural Selection

    PubMed Central

    Haasl, Ryan J.; Payseur, Bret A.

    2013-01-01

    The ability to survey polymorphism on a genomic scale has enabled genome-wide scans for the targets of natural selection. Theory that connects patterns of genetic variation to evidence of natural selection most often assumes a diallelic locus and no recurrent mutation. Although these assumptions are suitable to selection that targets single nucleotide variants, fundamentally different types of mutation generate abundant polymorphism in genomes. Moreover, recent empirical results suggest that mutationally complex, multiallelic loci including microsatellites and copy number variants are sometimes targeted by natural selection. Given their abundance, the lack of inference methods tailored to the mutational peculiarities of these types of loci represents a notable gap in our ability to interrogate genomes for signatures of natural selection. Previous theoretical investigations of mutation-selection balance at multiallelic loci include assumptions that limit their application to inference from empirical data. Focusing on microsatellites, we assess the dynamics and population-level consequences of selection targeting mutationally complex variants. We develop general models of a multiallelic fitness surface, a realistic model of microsatellite mutation, and an efficient simulation algorithm. Using these tools, we explore mutation-selection-drift equilibrium at microsatellites and investigate the mutational history and selective regime of the microsatellite that causes Friedreich’s ataxia. We characterize microsatellite selective events by their duration and cost, note similarities to sweeps from standing point variation, and conclude that it is premature to label microsatellites as ubiquitous agents of efficient adaptive change. Together, our models and simulation algorithm provide a powerful framework for statistical inference, which can be used to test the neutrality of microsatellites and other multiallelic variants. PMID:23104080

  5. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  6. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Starting from the initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversity. The method involves several elements, including data representation, coverage estimation, affinity measures, and matching rules, which are discussed across the different variations. The various negative selection algorithms are also categorized by different criteria. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective developments and the applicability of negative selection algorithms, along with their influence on related areas, are then considered based on this discussion.
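    For readers new to the family, here is a minimal real-valued negative selection sketch in the "censoring" style the review covers: random candidate detectors are kept only if they fail to match any self sample. The Euclidean matching rule and fixed radii are just one of the many representation and matching-rule choices the paper surveys.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def train_detectors(self_samples, n_detectors=200, self_radius=0.1, max_tries=100_000):
        """Generate detectors covering nonself: discard candidates near self."""
        detectors, tries = [], 0
        dim = self_samples.shape[1]
        while len(detectors) < n_detectors and tries < max_tries:
            tries += 1
            candidate = rng.random(dim)
            if np.linalg.norm(self_samples - candidate, axis=1).min() > self_radius:
                detectors.append(candidate)
        return np.array(detectors)

    def is_anomalous(x, detectors, match_radius=0.1):
        """A sample is flagged nonself if any detector matches it."""
        return np.linalg.norm(detectors - x, axis=1).min() <= match_radius
    ```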

  7. Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms.

    PubMed

    Jaffe, Jacob D; Feeney, Caitlin M; Patel, Jinal; Lu, Xiaodong; Mani, D R

    2016-11-01

    Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.
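    A sketch of the GA machinery the abstract describes, assuming a samples-by-transitions matrix of DIA fragment intensities and a targeted quantification vector for one analyte. The R²-based fitness, truncation selection, one-point crossover, and bit-flip mutation are generic GA choices assumed here, not necessarily the authors' exact operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(mask, dia, targeted):
        """Agreement (squared Pearson r) between the targeted quantification
        and a DIA quantification summed over the selected transitions."""
        if mask.sum() == 0:
            return -np.inf
        r = np.corrcoef(dia[:, mask].sum(axis=1), targeted)[0, 1]
        return r * r

    def ga_select_transitions(dia, targeted, pop=40, gens=100, p_mut=0.05):
        n = dia.shape[1]
        population = rng.random((pop, n)) < 0.5          # random transition subsets
        for _ in range(gens):
            scores = np.array([fitness(m, dia, targeted) for m in population])
            parents = population[np.argsort(scores)][pop // 2:]   # keep the best half
            pairs = rng.integers(0, len(parents), size=(pop, 2))
            cuts = rng.integers(1, n, size=pop)                   # one-point crossover
            children = np.array([np.r_[parents[i][:c], parents[j][c:]]
                                 for (i, j), c in zip(pairs, cuts)])
            children ^= rng.random((pop, n)) < p_mut              # bit-flip mutation
            population = children
        scores = np.array([fitness(m, dia, targeted) for m in population])
        return population[np.argmax(scores)]
    ```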

  8. Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Jaffe, Jacob D.; Feeney, Caitlin M.; Patel, Jinal; Lu, Xiaodong; Mani, D. R.

    2016-11-01

    Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.

  9. Priority target conditions for algorithms for monitoring children's growth: Interdisciplinary consensus.

    PubMed

    Scherdel, Pauline; Reynaud, Rachel; Pietrement, Christine; Salaün, Jean-François; Bellaïche, Marc; Arnould, Michel; Chevallier, Bertrand; Piloquet, Hugues; Jobez, Emmanuel; Cheymol, Jacques; Bichara, Emmanuelle; Heude, Barbara; Chalumeau, Martin

    2017-01-01

    Growth monitoring of apparently healthy children aims at early detection of serious conditions through the use of both clinical expertise and algorithms that define abnormal growth. Optimization of growth monitoring requires standardization of the definition of abnormal growth, and the selection of the priority target conditions is a prerequisite of such standardization. To obtain a consensus about the priority target conditions for algorithms monitoring children's growth. We applied a formal consensus method with a modified version of the RAND/UCLA method, based on three phases (preparatory, literature review, and rating), with the participation of expert advisory groups from the relevant professional medical societies (ranging from primary care providers to hospital subspecialists) as well as parent associations. We asked experts in the pilot (n = 11), reading (n = 8) and rating (n = 60) groups to complete the list of diagnostic classification of the European Society for Paediatric Endocrinology and then to select the conditions meeting the four predefined criteria of an ideal type of priority target condition. Strong agreement was obtained for the 8 conditions selected by the experts among the 133 possible: celiac disease, Crohn disease, craniopharyngioma, juvenile nephronophthisis, Turner syndrome, growth hormone deficiency with pituitary stalk interruption syndrome, infantile cystinosis, and hypothalamic-optochiasmatic astrocytoma (in decreasing order of agreement). This national consensus can be used to evaluate the algorithms currently suggested for growth monitoring. The method used for this national consensus could be re-used to obtain an international consensus.

  10. Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-11-01

    We discuss new approaches to analyzing laser-gated viewing data for nonline-of-sight vision, with a frame-to-frame back-projection as well as feature selection algorithms. Whereas earlier back-projection approaches use time transients for each pixel, our method has the ability to calculate the projection of imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.

  11. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, the constrained energy minimization (CEM) and the matched filter (MF), can usually be reduced to the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first; however, this difference leads to a difference in target detection results. That is to say, the selection of the data origin directly affects the performance of the detector. Does there, then, exist a data origin other than the zero point and the mean vector that yields better target detection performance? This is a very meaningful issue in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function by introducing the data origin as another variable, and the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between target and background. Therefore, this new algorithm is referred to as the clever eye (CE) algorithm. Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
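    For context, the CEM/MF family the paper generalizes can be written as one filter parameterized by the data origin: the zero origin gives CEM, the data mean gives MF, and the clever eye optimizes over the origin itself. A minimal numpy sketch of the shared filter follows; the Sherman-Morrison/gradient-ascent search for the optimal origin is omitted.

    ```python
    import numpy as np

    def origin_shifted_filter(X, d, origin=None):
        """Filter output for pixels X (N x B) and target signature d (B,),
        computed about a chosen data origin."""
        if origin is None:
            origin = np.zeros(X.shape[1])     # zero origin -> CEM; X.mean(0) -> MF
        Xs, ds = X - origin, d - origin
        R = Xs.T @ Xs / Xs.shape[0]           # correlation matrix about the origin
        Rinv = np.linalg.inv(R)
        w = Rinv @ ds / (ds @ Rinv @ ds)      # minimize output energy s.t. w.T ds = 1
        return Xs @ w                         # detection scores per pixel
    ```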

  12. THINK OUTSIDE THE COLOR BOX: PROBABILISTIC TARGET SELECTION AND THE SDSS-XDQSO QUASAR TARGETING CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovy, J.; Sheldon, E.; Hennawi, J. F.

    2011-03-10

    We present the SDSS-XDQSO quasar targeting catalog for efficient flux-based quasar target selection down to the faint limit of the Sloan Digital Sky Survey (SDSS) catalog, even at medium redshifts (2.5 ≲ z ≲ 3) where the stellar contamination is significant. We build models of the distributions of stars and quasars in flux space down to the flux limit by applying the extreme-deconvolution method to estimate the underlying density. We convolve this density with the flux uncertainties when evaluating the probability that an object is a quasar. This approach results in a targeting algorithm that is more principled, more efficient, and faster than other similar methods. We apply the algorithm to derive low-redshift (z < 2.2), medium-redshift (2.2 ≤ z ≤ 3.5), and high-redshift (z > 3.5) quasar probabilities for all 160,904,060 point sources with dereddened i-band magnitude between 17.75 and 22.45 mag in the 14,555 deg² of imaging from SDSS Data Release 8. The catalog can be used to define a uniformly selected and efficient low- or medium-redshift quasar survey, such as that needed for the SDSS-III's Baryon Oscillation Spectroscopic Survey project. We show that the XDQSO technique performs as well as the current best photometric quasar-selection technique at low redshift, and outperforms all other flux-based methods for selecting the medium-redshift quasars of our primary interest. We make code to reproduce the XDQSO quasar target selection publicly available.
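    A sketch of the probability evaluation described above, assuming each class density is a Gaussian mixture in flux space (the form extreme deconvolution produces) and that number counts set the priors. Convolving the underlying density with the flux uncertainty then amounts to adding the measurement covariance to each mixture component; the mixture containers used here are assumed structures, not the XDQSO code's API.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def quasar_probability(flux, flux_cov, star_gmm, qso_gmm, n_star, n_qso):
        """Posterior P(quasar | observed flux) for one object. Each *_gmm is
        assumed to be {"weights": [...], "means": [...], "covs": [...]}."""
        def density(gmm):
            return sum(w * multivariate_normal.pdf(flux, mean=m, cov=c + flux_cov)
                       for w, m, c in zip(gmm["weights"], gmm["means"], gmm["covs"]))
        lq = n_qso * density(qso_gmm)    # prior-weighted quasar likelihood
        ls = n_star * density(star_gmm)  # prior-weighted stellar likelihood
        return lq / (lq + ls)
    ```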

  13. Robust autofocus algorithm for ISAR imaging of moving targets

    NASA Astrophysics Data System (ADS)

    Li, Jian; Wu, Renbiao; Chen, Victor C.

    2000-08-01

    A robust autofocus approach, referred to as AUTOCLEAN (AUTOfocus via CLEAN), is proposed for motion compensation in ISAR (inverse synthetic aperture radar) imaging of moving targets. It is a parametric algorithm based on a very flexible data model which takes into account arbitrary range migration and arbitrary phase errors across the synthetic aperture that may be induced by unwanted radial motion of the target as well as propagation or system instability. AUTOCLEAN can be classified as a multiple scatterer algorithm (MSA), but it differs considerably from other existing MSAs in several aspects: (1) dominant scatterers are selected automatically in the two-dimensional (2-D) image domain; (2) scatterers may not be well-isolated or very dominant; (3) phase and RCS (radar cross section) information from each selected scatterer are combined in an optimal way; (4) the troublesome phase unwrapping step is avoided. AUTOCLEAN is computationally efficient and involves only a sequence of FFTs (fast Fourier transforms). Another good feature of AUTOCLEAN is that its performance can be progressively improved by assuming a larger number of dominant scatterers for the target. Hence it can be easily configured for real-time applications including, for example, ATR (automatic target recognition) of non-cooperative moving targets, and for applications where image quality rather than computation time is the major concern, including, for example, the development and maintenance of low-observable aircraft. Numerical and experimental results have shown that AUTOCLEAN is a very robust autofocus tool for ISAR imaging.

  14. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the summed completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum summed completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
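    A minimal greedy sketch of the completeness-versus-cost trade described above: repeatedly observe the star with the best completeness per unit time until the budget runs out. The reward-per-cost ranking is an illustrative stand-in for the AYO-based optimization, and all inputs (completeness values, integration times, overhead) are assumed precomputed.

    ```python
    def greedy_schedule(targets, time_budget, overhead):
        """targets: {name: (completeness, integration_time)}; returns the
        observing list and the summed completeness."""
        remaining = dict(targets)
        schedule, total_completeness = [], 0.0
        while remaining:
            affordable = {k: v for k, v in remaining.items()
                          if v[1] + overhead <= time_budget}
            if not affordable:
                break  # nothing left that fits in the remaining mission time
            name, (comp, t_int) = max(affordable.items(),
                                      key=lambda kv: kv[1][0] / (kv[1][1] + overhead))
            schedule.append(name)
            total_completeness += comp
            time_budget -= t_int + overhead
            del remaining[name]
        return schedule, total_completeness
    ```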

  15. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored in the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, legal solutions become increasingly difficult to produce, so many solutions are eliminated and the time needed to produce legal solutions grows. Therefore, an improved algorithm is presented in this paper, which combines the simulated annealing algorithm with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the function and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
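    A sketch of the simulated-annealing half of the hybrid approach, assuming callable cost functions for query response time and total maintenance cost. The one-view-flip neighborhood, the geometric cooling schedule, and starting from the all-views state are illustrative assumptions.

    ```python
    import math
    import random

    def sa_view_selection(views, query_time, maint_cost, t_limit,
                          T0=100.0, alpha=0.95, steps=5000, seed=0):
        """Minimize maintenance cost over view subsets subject to a query
        response-time bound; infeasible neighbors are simply rejected."""
        rand = random.Random(seed)
        state = frozenset(views)                 # materialize everything: feasible start
        best, best_cost = state, maint_cost(state)
        T = T0
        for _ in range(steps):
            cand = state ^ {rand.choice(views)}  # flip one view in or out
            if query_time(cand) > t_limit:
                continue                         # violates the response-time constraint
            delta = maint_cost(cand) - maint_cost(state)
            if delta < 0 or rand.random() < math.exp(-delta / T):
                state = cand
                if maint_cost(state) < best_cost:
                    best, best_cost = state, maint_cost(state)
            T *= alpha                           # geometric cooling
        return best, best_cost
    ```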

  16. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection for infrared maritime targets. Besides, in order to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  17. Algorithm research on infrared imaging target extraction based on GAC model

    NASA Astrophysics Data System (ADS)

    Li, Yingchun; Fan, Youchen; Wang, Yanqing

    2016-10-01

    Good target detection and tracking techniques are significantly meaningful for increasing infrared target detection distance and enhancing resolution capacity. For the target detection problem in infrared imaging, firstly, the basic principles of the level set method and the GAC model are analyzed in detail. Secondly, a "convergent force" is added to build an improved GAC model, addressing the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the complementary advantages that the approximate position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. In order to verify the effectiveness of the model, two groups of experiments were carried out on images with different noise levels, with comparative analysis against the LBF and LIF models. The experimental results show that under slight noise the target can be locked well by the LIF and LBF algorithms, with a segmentation accuracy above 0.8. However, under strong noise the GAC, LIF and LBF algorithms cannot distinguish the target from the noise, so many non-target parts are extracted during the iterative process, and the segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the accurate target position and keeps the segmentation accuracy above 0.8.

  18. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

    In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During the tracking process, camera shake or occlusion may lead to tracking failure; the proposed algorithm can solve this problem effectively. The method uses the spatio-temporal context algorithm as the main research object. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm then tracks the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the mean-shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.

  19. Target-type probability combining algorithms for multisensor tracking

    NASA Astrophysics Data System (ADS)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs and IRSTs, as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations, and missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting, via a procedure that propagates the target type probability states between measurement time instances; a sketch of the forgetting-plus-update recursion follows. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study, whose setup includes 46 air targets of 6 different types tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
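    Concretely, for one track: the type distribution is relaxed toward uniform between measurements (one simple forgetting model, an assumption here) and then updated with a sensor's type-likelihood vector by Bayes' rule.

    ```python
    import numpy as np

    def propagate(p, forgetting=0.05):
        """Data forgetting between measurement instants: mix the discrete type
        distribution toward uniform so stale evidence decays."""
        uniform = np.full_like(p, 1.0 / p.size)  # p is a float probability vector
        return (1.0 - forgetting) * p + forgetting * uniform

    def update(p, likelihood):
        """Recursive Bayes update with a sensor's likelihood over target types
        (e.g., an ESM report concentrating mass on consistent emitter types)."""
        posterior = propagate(p) * likelihood
        return posterior / posterior.sum()
    ```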

  20. HomoTarget: a new algorithm for prediction of microRNA targets in Homo sapiens.

    PubMed

    Ahmadi, Hamed; Ahmadi, Ali; Azimzadeh-Jamalkandi, Sadegh; Shoorehdeli, Mahdi Aliyari; Salehzadeh-Yazdi, Ali; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-02-01

    MiRNAs play an essential role in the networks of gene regulation by inhibiting the translation of target mRNAs. Several computational approaches have been proposed for the prediction of miRNA target-genes. Reports reveal a large fraction of under-predicted or falsely predicted target genes. Thus, there is an imperative need to develop a computational method by which the target mRNAs of existing miRNAs can be correctly identified. In this study, a combined pattern recognition neural network (PRNN) and principal component analysis (PCA) architecture has been proposed in order to model the complicated relationship between miRNAs and their target mRNAs in humans. The results of several types of intelligent classifiers and our proposed model were compared, showing that our algorithm outperformed them with higher sensitivity and specificity. Using the recent release of the miRBase database to find potential targets of miRNAs, this model incorporated twelve structural, thermodynamic and positional features of miRNA:mRNA binding sites to select target candidates.

  1. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.

  2. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

    Hyperspectral remote sensing data can be used in civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data are used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for the evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection across sensors; the same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials: target signatures are linearly combined with different background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm, sketched below, is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations.
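    For reference, a compact version of the ACE statistic used as the benchmark detector; estimating the background covariance from the whole mean-removed scene is a common simplification assumed here.

    ```python
    import numpy as np

    def ace_scores(X, s):
        """ACE(x) = (s^T R^-1 x)^2 / ((s^T R^-1 s)(x^T R^-1 x)) for pixels
        X (N x B) and target signature s (B,), with R the background covariance."""
        mu = X.mean(axis=0)
        Xc, sc = X - mu, s - mu
        Rinv = np.linalg.inv(Xc.T @ Xc / Xc.shape[0])
        num = (Xc @ Rinv @ sc) ** 2
        den = (sc @ Rinv @ sc) * np.einsum('ij,jk,ik->i', Xc, Rinv, Xc)
        return num / den   # large scores indicate target-like pixels
    ```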

  3. A Deterministic Approach to Active Debris Removal Target Selection

    NASA Astrophysics Data System (ADS)

    Lidtke, A.; Lewis, H.; Armellin, R.

    2014-09-01

    Many decisions with widespread economic, political and legal consequences are being considered based on space debris simulations showing that Active Debris Removal (ADR) may be necessary, as concerns about the sustainability of spaceflight increase. The debris environment predictions are based on low-accuracy ephemerides and propagators. This raises doubts about the accuracy of those prognoses themselves but also of the potential ADR target lists that are produced. Target selection is considered highly important, as removal of many objects will increase the overall mission cost. Selecting the most likely candidates as soon as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the found conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Toolkit, and a relative probability error smaller than 1.5% has been achieved in the final maximum collision probability. Two target lists are produced based on the ranking of the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and estimates of the true collision probability that were computed with covariance information. The top-priority targets are compared, and the impacts of the data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the

  4. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image, the computational cost of this method is low, and image correlation matching is a basic method of target tracking. This paper mainly studies an image matching algorithm for gray-scale images whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems, yet target tracking often requires high real-time performance. With this in mind, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system, as sketched below. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; by comparison, the precision difference between these two algorithms is small, less than 0.01 pixel. In order to study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector fixed to an arc pendulum table; pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was studied: Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
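    A sketch of the two steps just described: an exhaustive integer-pixel SAD search followed by the parabolic vertex refinement along each axis. It assumes the SAD minimum does not sit on the border of the search area; the function names and border handling are illustrative.

    ```python
    import numpy as np

    def sad_search(image, template):
        """Integer-pixel SAD surface: slide the template over the image."""
        image = image.astype(float)        # avoid uint8 overflow in the difference
        template = template.astype(float)
        H, W = image.shape
        h, w = template.shape
        sad = np.empty((H - h + 1, W - w + 1))
        for i in range(sad.shape[0]):
            for j in range(sad.shape[1]):
                sad[i, j] = np.abs(image[i:i + h, j:j + w] - template).sum()
        return sad

    def subpixel_refine(sad, i, j):
        """Fit a parabola through three points along each axis about the SAD
        minimum (i, j) and return the vertex as the sub-pixel location."""
        di = 0.5 * (sad[i - 1, j] - sad[i + 1, j]) / (sad[i - 1, j] - 2 * sad[i, j] + sad[i + 1, j])
        dj = 0.5 * (sad[i, j - 1] - sad[i, j + 1]) / (sad[i, j - 1] - 2 * sad[i, j] + sad[i, j + 1])
        return i + di, j + dj
    ```

    The integer minimum comes from np.unravel_index(sad.argmin(), sad.shape); the refinement then adds a fractional offset along each axis.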

  5. Comparison of human and algorithmic target detection in passive infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Hutchinson, Meredith

    2003-09-01

    We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel-level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.

  6. Computational selection of antibody-drug conjugate targets for breast cancer

    PubMed Central

    Fauteux, François; Hill, Jennifer J.; Jaramillo, Maria L.; Pan, Youlian; Phan, Sieu; Famili, Fazel; O'Connor-McCourt, Maureen

    2016-01-01

    The selection of therapeutic targets is a critical aspect of antibody-drug conjugate research and development. In this study, we applied computational methods to select candidate targets overexpressed in three major breast cancer subtypes as compared with a range of vital organs and tissues. Microarray data corresponding to over 8,000 tissue samples were collected from the public domain. Breast cancer samples were classified into molecular subtypes using an iterative ensemble approach combining six classification algorithms and three feature selection techniques, including a novel kernel density-based method. This feature selection method was used in conjunction with differential expression and subcellular localization information to assemble a primary list of targets. A total of 50 cell membrane targets were identified, including one target for which an antibody-drug conjugate is in clinical use, and six targets for which antibody-drug conjugates are in clinical trials for the treatment of breast cancer and other solid tumors. In addition, 50 extracellular proteins were identified as potential targets for non-internalizing strategies and alternative modalities. Candidate targets linked with the epithelial-to-mesenchymal transition were identified by analyzing differential gene expression in epithelial and mesenchymal tumor-derived cell lines. Overall, these results show that mining human gene expression data has the power to select and prioritize breast cancer antibody-drug conjugate targets, and the potential to lead to new and more effective cancer therapeutics. PMID:26700623

  7. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods, and in many cases, when the redundant ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.
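    The abstract does not spell out the proposed algorithm, so as a point of reference only, here is the classic greedy formulation of tag SNP selection that such methods are typically measured against: cover every SNP with some tag whose pairwise r² meets a threshold. The r² matrix is assumed given.

    ```python
    import numpy as np

    def greedy_tag_snps(r2, threshold=0.8):
        """Pick tags until every SNP is covered (r2 >= threshold by some tag).
        Terminates because each SNP covers itself (r2[i, i] == 1)."""
        n = r2.shape[0]
        uncovered = set(range(n))
        tags = []
        while uncovered:
            best = max(range(n),
                       key=lambda s: sum(r2[s, t] >= threshold for t in uncovered))
            tags.append(best)
            uncovered = {t for t in uncovered if r2[best, t] < threshold}
        return tags
    ```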

  8. The SDSS-IV extended baryon oscillation spectroscopic survey: Luminous red galaxy target selection

    DOE PAGES

    Prakash, Abhishek; Licquia, Timothy C.; Newman, Jeffrey A.; ...

    2016-06-08

    Here, we describe the algorithm used to select the luminous red galaxy (LRG) sample for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV) using photometric data from both the SDSS and the Wide-field Infrared Survey Explorer. LRG targets are required to meet a set of color selection criteria and have z-band and i-band MODEL magnitudes z < 19.95 and 19.9 < i < 21.8, respectively. Our algorithm selects roughly 50 LRG targets per square degree, the great majority of which lie in the redshift range 0.6 < z < 1.0 (median redshift 0.71). We also demonstrate that our methods are highly effective at eliminating stellar contamination and lower-redshift galaxies. We perform a number of tests using spectroscopic data from SDSS-III/BOSS ancillary programs to determine the redshift reliability of our target selection and its ability to meet the science requirements of eBOSS. The SDSS spectra are of high enough signal-to-noise ratio that at least ~89% of the target sample yields secure redshift measurements. Finally, we present tests of the uniformity and homogeneity of the sample, demonstrating that it should be clean enough for studies of the large-scale structure of the universe at higher redshifts than SDSS-III/BOSS LRGs reached.

  9. THE SDSS-IV EXTENDED BARYON OSCILLATION SPECTROSCOPIC SURVEY: LUMINOUS RED GALAXY TARGET SELECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, Abhishek; Licquia, Timothy C.; Newman, Jeffrey A.

    2016-06-01

    We describe the algorithm used to select the luminous red galaxy (LRG) sample for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey IV (SDSS-IV) using photometric data from both the SDSS and the Wide-field Infrared Survey Explorer. LRG targets are required to meet a set of color selection criteria and have z-band and i-band MODEL magnitudes z < 19.95 and 19.9 < i < 21.8, respectively. Our algorithm selects roughly 50 LRG targets per square degree, the great majority of which lie in the redshift range 0.6 < z < 1.0 (median redshift 0.71). We demonstrate that our methods are highly effective at eliminating stellar contamination and lower-redshift galaxies. We perform a number of tests using spectroscopic data from SDSS-III/BOSS ancillary programs to determine the redshift reliability of our target selection and its ability to meet the science requirements of eBOSS. The SDSS spectra are of high enough signal-to-noise ratio that at least ∼89% of the target sample yields secure redshift measurements. We also present tests of the uniformity and homogeneity of the sample, demonstrating that it should be clean enough for studies of the large-scale structure of the universe at higher redshifts than SDSS-III/BOSS LRGs reached.

  10. Sentiment analysis enhancement with target variable in Kumar’s Algorithm

    NASA Astrophysics Data System (ADS)

    Arman, A. A.; Kawi, A. B.; Hurriyati, R.

    2016-04-01

    Sentiment analysis (also known as opinion mining) refers to the use of text analysis and computational linguistics to identify and extract subjective information from source materials. Sentiment analysis is widely applied to reviews and discussions in social media for many purposes, ranging from marketing and customer service to gauging public opinion of public policy. One popular algorithm for sentiment analysis is the Kumar algorithm, developed by Kumar and Sebastian. The Kumar algorithm can identify the sentiment score of a statement, sentence or tweet, but cannot determine the object or target to which the sentiment relates. This research proposes a solution to that challenge by adding to the existing Kumar algorithm a component that represents the object or target. The result of this research is a modified algorithm that gives a sentiment score with respect to a given object or target.

  11. Selected-node stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Duso, Lorenzo; Zechner, Christoph

    2018-04-01

    Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties, and addressing them remains a daunting challenge. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.

  12. Investigation of frame-to-frame back projection and feature selection algorithms for non-line-of-sight laser gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-10-01

    In the present paper, we discuss new approaches to analyzing laser gated viewing data for non-line-of-sight vision, with a novel frame-to-frame back projection as well as feature selection algorithms. While earlier back projection approaches use time transients for each pixel, our new method has the ability to calculate the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.

  13. Design of an algorithm for autonomous docking with a freely tumbling target

    NASA Astrophysics Data System (ADS)

    Nolet, Simon; Kong, Edmund; Miller, David W.

    2005-05-01

    For complex unmanned docking missions, limited communication bandwidth and delays do not allow ground operators to have immediate access to all real-time state information and hence prevent them from playing an active role in the control loop. Advanced control algorithms are needed to make mission-critical decisions to ensure the safety of both spacecraft during close proximity maneuvers. This is especially true when unexpected contingencies occur. These algorithms will enable multiple space missions, including servicing of damaged spacecraft and missions to Mars. A key characteristic of spacecraft servicing missions is that the target spacecraft is likely to be freely tumbling due to various mechanical failures or fuel depletion. Very few technical references on autonomous docking with a freely tumbling target can be found in the literature, and very few such maneuvers have been attempted. The MIT Space Systems Laboratory (SSL) is currently performing research on the subject. The objective of this research is to develop a control architecture that will enable safe and fuel-efficient docking of a thruster-based spacecraft with a freely tumbling target in the presence of obstacles and contingencies. The approach is to identify, select and implement state estimation, fault detection, isolation and recovery, optimal path planning and thruster management algorithms that, once properly integrated, can accomplish such a maneuver autonomously. Simulations and demonstrations on the SPHERES testbed developed by the MIT SSL will be executed to assess the performance of different combinations of algorithms. To date, experiments have been carried out at the MIT SSL 2-D Laboratory and at the NASA Marshall Space Flight Center (MSFC) flat floor.

  14. A real negative selection algorithm with evolutionary preference for anomaly detection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Chen, Wen; Li, Tao

    2017-04-01

    Traditional real negative selection algorithms (RNSAs) adopt the estimated coverage (c0) as the algorithm termination threshold and generate detectors randomly. With increasing dimensionality, data samples tend to reside in low-dimensional subspaces, so traditional detectors cannot effectively distinguish these samples. Furthermore, in a high-dimensional feature space, c0 cannot exactly reflect the detector set's coverage of the nonself space, which can cause the algorithm to terminate unexpectedly when the number of detectors is insufficient. These shortcomings make traditional RNSAs perform poorly in high-dimensional feature spaces. Based upon the "evolutionary preference" theory in immunology, this paper presents a real negative selection algorithm with evolutionary preference (RNSAP). RNSAP utilizes the "unknown nonself space", the "low-dimensional target subspace" and "known nonself features" as the evolutionary preference to guide the generation of detectors, thus ensuring that the detectors cover the nonself space more effectively. Besides, RNSAP uses redundancy instead of c0 as the termination threshold; in this way, RNSAP can generate adequate detectors under a proper convergence rate. Theoretical analysis and experimental results demonstrate that, compared to the classical RNSA (V-detector), RNSAP achieves a higher detection rate with fewer detectors and less computing cost.

  15. THE SDSS-III BARYON OSCILLATION SPECTROSCOPIC SURVEY: QUASAR TARGET SELECTION FOR DATA RELEASE NINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Nicholas P.; Kirkpatrick, Jessica A.; Carithers, William C.

    2012-03-01

    The SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), a five-year spectroscopic survey of 10,000 deg², achieved first light in late 2009. One of the key goals of BOSS is to measure the signature of baryon acoustic oscillations (BAOs) in the distribution of Lyα absorption from the spectra of a sample of ~150,000 z > 2.2 quasars. Along with measuring the angular diameter distance at z ≈ 2.5, BOSS will provide the first direct measurement of the expansion rate of the universe at z > 2. One of the biggest challenges in achieving this goal is an efficient target selection algorithm for quasars in the redshift range 2.2 < z < 3.5, where their colors tend to overlap those of the far more numerous stars. During the first year of the BOSS survey, quasar target selection (QTS) methods were developed and tested to meet the requirement of delivering at least 15 quasars deg⁻² in this redshift range, with a goal of 20, out of 40 targets deg⁻² allocated to the quasar survey. To achieve these surface densities, the magnitude limit of the quasar targets was set at g ≤ 22.0 or r ≤ 21.85. While detection of the BAO signature in the distribution of Lyα absorption in quasar spectra does not require a uniform target selection algorithm, many other astrophysical studies do. We have therefore defined a uniformly selected subsample of 20 targets deg⁻², for which the selection efficiency is just over 50% (~10 z > 2.20 quasars deg⁻²). This 'CORE' subsample will be fixed for Years Two through Five of the survey. For the remaining 20 targets deg⁻², we will continue to develop improved selection techniques, including the use of additional data sets beyond the Sloan Digital Sky Survey (SDSS) imaging data. In this paper, we describe the evolution and implementation of the BOSS QTS algorithms during the first two years of BOSS operations (through 2011 July), in support of the science investigations

  16. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    NASA Astrophysics Data System (ADS)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily leave the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target exits the current frame and reenters in a later frame, the reentry location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. To address this problem, a detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF-based clutter rejection module, to determine the target coordinates; once the target is confirmed, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
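
    The Fukunaga-Koontz Transform at the heart of the detection stage can be sketched in a few lines of NumPy: it whitens the sum of the two class correlation matrices (target and background/clutter) so that their eigenvalues become complementary. This is a generic textbook FKT sketch, not the paper's trained detector.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Fukunaga-Koontz transform for two classes (target vs. clutter).

    X1, X2 : (n_samples, n_features) training patches per class.
    Whitens the summed correlation matrix so that, in the new basis,
    the eigenvalues of the two class matrices sum to one: directions
    most energetic for class 1 are least energetic for class 2.
    """
    P1 = X1.T @ X1 / len(X1)
    P2 = X2.T @ X2 / len(X2)
    w, V = np.linalg.eigh(P1 + P2)
    keep = w > 1e-10
    W = V[:, keep] / np.sqrt(w[keep])        # whitening transform
    l1, U = np.linalg.eigh(W.T @ P1 @ W)     # shared eigenbasis
    return W @ U, l1   # basis vectors; l1 near 1 -> target-dominant

# Projecting a candidate patch onto the most target-dominant vectors and
# comparing class energies flags it as target-like or clutter-like.
```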

  17. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree.

    PubMed

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen-host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and their target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify miRNA targets experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied our proposed method to a validated human dataset. We achieved a classification accuracy of nearly 93.9%, which could be related to the selection of the best rules.
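
    A minimal sketch of the overall idea, wrapping a decision tree inside a genetic search over feature subsets, might look as follows; scikit-learn's CART tree with entropy splits stands in for C4.5, and the population size, selection, crossover, and mutation settings are illustrative rather than the authors' configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def ga_feature_search(X, y, pop=20, gens=30, p_mut=0.05):
    """Evolve binary feature masks; fitness = CV accuracy of a tree.

    An entropy-split CART tree stands in for C4.5; truncation selection,
    one-point crossover, and bit-flip mutation are illustrative choices.
    """
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5

    def fitness(mask):
        if not mask.any():
            return 0.0
        tree = DecisionTreeClassifier(criterion="entropy")
        return cross_val_score(tree, X[:, mask], y, cv=5).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]
        points = rng.integers(1, n, size=pop // 2)
        kids = np.array([np.concatenate([a[:p], b[p:]])
                         for a, b, p in zip(parents,
                                            np.roll(parents, 1, axis=0),
                                            points)])
        kids ^= rng.random(kids.shape) < p_mut
        population = np.vstack([parents, kids])
    return max(population, key=fitness)   # best feature mask found
```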

  18. Prediction of microRNA target genes using an efficient genetic algorithm-based decision tree

    PubMed Central

    Rabiee-Ghahfarrokhi, Behzad; Rafiei, Fariba; Niknafs, Ali Akbar; Zamani, Behzad

    2015-01-01

    MicroRNAs (miRNAs) are small, non-coding RNA molecules that regulate gene expression in almost all plants and animals. They play an important role in key processes, such as proliferation, apoptosis, and pathogen–host interactions. Nevertheless, the mechanisms by which miRNAs act are not fully understood. The first step toward unraveling the function of a particular miRNA is the identification of its direct targets. This step has proven to be quite challenging in animals, primarily because of incomplete complementarity between miRNAs and their target mRNAs. In recent years, the use of machine-learning techniques has greatly improved the prediction of miRNA targets, avoiding the need for costly and time-consuming experiments to identify miRNA targets experimentally. Among the most important machine-learning algorithms are decision trees, which classify data based on extracted rules. In the present work, we used a genetic algorithm in combination with a C4.5 decision tree for the prediction of miRNA targets. We applied our proposed method to a validated human dataset. We achieved a classification accuracy of nearly 93.9%, which could be related to the selection of the best rules. PMID:26649272

  19. Performance of resonant radar target identification algorithms using intra-class weighting functions

    NASA Astrophysics Data System (ADS)

    Mustafa, A.

    The use of calibrated resonant-region radar cross section (RCS) measurements of targets for the classification of large aircraft is discussed. Errors in the RCS estimate of full-scale aircraft flying over an ocean, introduced by ionospheric variability and sea conditions, were studied. The Weighted Target Representative (WTR) classification algorithm was developed, implemented, tested, and compared with the nearest neighbor (NN) algorithm. The WTR algorithm has low sensitivity to uncertainty in the aspect angle of the unknown target returns. In addition, this algorithm is based on a new catalog of representative data, which reduces the storage requirements and increases the computational efficiency of the classification system compared with the NN algorithm. Experiments were designed to study and evaluate the characteristics of the WTR and NN algorithms, investigate the classifiability of targets, and study the relative behavior of the number of misclassifications as a function of the target backscatter features. The classification results and statistics are shown in the form of performance curves, performance tables, and confusion tables.

  20. Target Selection for the SDSS-IV APOGEE-2 Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zasowski, G.; Cohen, R. E.; Carlberg, J. K.

    APOGEE-2 is a high-resolution, near-infrared spectroscopic survey observing ∼3 × 10⁵ stars across the entire sky. It is the successor to APOGEE and is part of the Sloan Digital Sky Survey IV (SDSS-IV). APOGEE-2 is expanding on APOGEE’s goals of addressing critical questions of stellar astrophysics, stellar populations, and Galactic chemodynamical evolution using (1) an enhanced set of target types and (2) a second spectrograph at Las Campanas Observatory in Chile. APOGEE-2 is targeting red giant branch and red clump stars, RR Lyrae, low-mass dwarf stars, young stellar objects, and numerous other Milky Way and Local Group sources across the entire sky from both hemispheres. In this paper, we describe the APOGEE-2 observational design, target selection catalogs and algorithms, and the targeting-related documentation included in the SDSS data releases.

  1. Ad Hoc Access Gateway Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Jie, Liu

    With the continuous development of mobile communication technology, Ad Hoc access networks have become a hot research topic. Ad Hoc access network nodes can be used to extend the multi-hop communication range of a mobile communication system and to improve data rates at the network edge. For mobile nodes in an Ad Hoc network to reach the internet, communication with peer nodes on the internet must be achieved through a gateway. Therefore, the key issues in Ad Hoc access networks are gateway discovery, gateway selection in the multi-gateway case, and handover between different gateways. Considering both the mobile node and the gateway, this paper proposes an improved gateway selection algorithm based on the average number of hops, the average access time, and the stability of routes. The algorithm improves the access time of Ad Hoc nodes and the continuity of communication across gateway handovers, and can thus improve the quality of communication across the network.
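
    A toy version of the scoring idea, combining hop count, access time, and route stability into a single weighted figure of merit per gateway, is sketched below; the field names and weights are hypothetical, not the paper's metric.

```python
def select_gateway(gateways, w_hops=0.4, w_time=0.3, w_stab=0.3):
    """Rank candidate gateways by a weighted figure of merit.

    Each gateway record holds its average hop count to the node, average
    access time (s), and a route-stability estimate in [0, 1]. Fewer
    hops and lower access time score higher; weights are illustrative.
    """
    def score(g):
        return (w_hops / (1 + g["hops"])
                + w_time / (1 + g["access_time"])
                + w_stab * g["stability"])
    return max(gateways, key=score)

best = select_gateway([
    {"id": "gw1", "hops": 3, "access_time": 0.08, "stability": 0.9},
    {"id": "gw2", "hops": 2, "access_time": 0.12, "stability": 0.6},
])
```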

  2. Burglar Target Selection

    PubMed Central

    Townsley, Michael; Bernasco, Wim; Ruiter, Stijn; Johnson, Shane D.; White, Gentry; Baum, Scott

    2015-01-01

    Objectives: This study builds on research undertaken by Bernasco and Nieuwbeerta and explores the generalizability of a theoretically derived offender target selection model in three cross-national study regions. Methods: Taking a discrete spatial choice approach, we estimate the impact of both environment- and offender-level factors on residential burglary placement in the Netherlands, the United Kingdom, and Australia. Combining cleared burglary data from all study regions in a single statistical model, we make statistical comparisons between environments. Results: In all three study regions, the likelihood an offender selects an area for burglary is positively influenced by proximity to their home, the proportion of easily accessible targets, and the total number of targets available. Furthermore, in two of the three study regions, juvenile offenders under the legal driving age are significantly more influenced by target proximity than adult offenders. Post hoc tests indicate the magnitudes of these impacts vary significantly between study regions. Conclusions: While burglary target selection strategies are consistent with opportunity-based explanations of offending, the impact of environmental context is significant. As such, the approach undertaken in combining observations from multiple study regions may aid criminology scholars in assessing the generalizability of observed findings across multiple environments. PMID:25866418

  3. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry sufficient information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors compared with existing algorithms.
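
    For reference, one conventional APA iteration can be written compactly in NumPy as below; a grouping/selection scheme like the one proposed would prune the columns of the input matrix before this update. The step size and regularization values are illustrative.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One conventional affine projection update.

    w : (n,) current filter coefficients
    X : (n, K) columns are the K most recent input vectors
    d : (K,) desired outputs for those inputs
    A grouping/selection scheme like the one proposed would first prune
    columns of X that carry overlapping information, shrinking K.
    """
    e = d - X.T @ w                              # a-priori error vector
    G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)
```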

  4. Multitarget mixture reduction algorithm with incorporated target existence recursions

    NASA Astrophysics Data System (ADS)

    Ristic, Branko; Arulampalam, Sanjeev

    2000-07-01

    The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.

  5. A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks

    NASA Astrophysics Data System (ADS)

    Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon

    In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro, and WLAN networks are the candidate networks for this selection algorithm, which is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, the traffic class, and the power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared to the handover delay, this particular network is removed from the candidate network list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm consists of AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost, and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides longer lifetime in the hybrid wireless network environment.
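
    The preprocessing step can be illustrated with a small sketch that predicts each candidate network's lifetime from the battery level and the interface's power draw, and drops candidates whose lifetime is short relative to the handover delay; the data layout and safety margin are assumptions, and the AHP/GRA ranking stage is not shown.

```python
def prune_candidates(networks, battery_mwh, handover_delay_s, margin=2.0):
    """Preprocessing: drop networks whose predicted lifetime is too short.

    `networks` maps a name to (power_draw_mw, qos, cost). Expected
    lifetime follows from the battery level and the interface's power
    draw; a network survives only if the mobile station would outlive
    `margin` times the handover delay. Survivors go on to AHP/GRA.
    """
    kept = {}
    for name, (power_mw, qos, cost) in networks.items():
        lifetime_s = battery_mwh / power_mw * 3600.0
        if lifetime_s >= margin * handover_delay_s:
            kept[name] = (lifetime_s, qos, cost)
    return kept

candidates = prune_candidates(
    {"CDMA": (250.0, 0.6, 0.4), "WiBro": (400.0, 0.8, 0.6),
     "WLAN": (900.0, 0.9, 0.2)},
    battery_mwh=500.0, handover_delay_s=120.0)
```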

  6. FOXP2 Targets Show Evidence of Positive Selection in European Populations

    PubMed Central

    Ayub, Qasim; Yngvadottir, Bryndis; Chen, Yuan; Xue, Yali; Hu, Min; Vernes, Sonja C.; Fisher, Simon E.; Tyler-Smith, Chris

    2013-01-01

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction. PMID:23602712
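
    The control-matching logic can be illustrated with a small Monte Carlo sketch: draw many control gene sets matched to the targets on some attribute and compare the targets' mean neutrality statistic with the resulting null distribution. Matching on gene length alone and the one-sided empirical p-value are simplifying assumptions, not the paper's full matching criteria.

```python
import numpy as np

rng = np.random.default_rng(2)

def targets_vs_matched_controls(target_stats, target_lengths,
                                pool_stats, pool_lengths,
                                n_sets=1000, tol=0.1):
    """Empirical test of a neutrality statistic against matched controls.

    target_stats / pool_stats   : per-gene statistic (e.g., Tajima's D)
    target_lengths/pool_lengths : matching attribute (gene length here,
                                  a stand-in for the real criteria)
    Draws `n_sets` control sets, each matching every target gene by
    length within `tol` relative difference, and returns the observed
    target mean plus a one-sided empirical p-value (selection is
    expected to lower frequency-spectrum statistics).
    """
    pool_stats = np.asarray(pool_stats)
    pool_lengths = np.asarray(pool_lengths)
    control_means = np.empty(n_sets)
    for i in range(n_sets):
        picks = [rng.choice(pool_stats[np.abs(pool_lengths - L) / L < tol])
                 for L in target_lengths]
        control_means[i] = np.mean(picks)
    obs = np.mean(target_stats)
    p_emp = (np.sum(control_means <= obs) + 1) / (n_sets + 1)
    return obs, p_emp
```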

  7. Penalty dynamic programming algorithm for dim targets detection in sensor systems.

    PubMed

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
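
    A one-dimensional toy version of the DP-TBD recursion with a penalty term is sketched below; the penalty field standing in for the tracker's state prediction, and the simple amplitude merit, are illustrative choices rather than the paper's formulation.

```python
import numpy as np

def pdp_tbd(frames, v_max=1, penalty=0.5, predicted=None):
    """Toy 1-D dynamic-programming track-before-detect with a penalty.

    frames    : (T, N) raw (unthresholded) sensor amplitudes
    v_max     : maximum per-frame target motion in pixels
    predicted : optional (T, N) field encoding distance from the
                tracker's predicted state; paths straying from the
                prediction are penalized, mimicking the feedback idea.
    Returns the accumulated merit of the best path ending at each pixel;
    thresholding its maximum declares a detection.
    """
    T, N = frames.shape
    merit = frames[0].astype(float)
    for t in range(1, T):
        new = np.empty(N)
        for x in range(N):
            lo, hi = max(0, x - v_max), min(N, x + v_max + 1)
            pen = penalty * predicted[t, x] if predicted is not None else 0.0
            new[x] = frames[t, x] + merit[lo:hi].max() - pen
        merit = new
    return merit
```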

  8. Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems

    PubMed Central

    Huang, Dayu; Xue, Anke; Guo, Yunfei

    2012-01-01

    In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074

  9. Instances selection algorithm by ensemble margin

    NASA Astrophysics Data System (ADS)

    Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine

    2018-05-01

    The main limitation of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. A solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without performance degradation. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field where the storage requirements and computational cost of the applied model become high. To address these limitations, we conduct a study based on the application of EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC; nucleus and cytoplasm) in cytological images.
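
    The ensemble margin itself is straightforward to compute from a matrix of base-classifier votes, as sketched below; the decile-based keep rule in the final comment is an illustrative selection criterion, not the EMIS rule.

```python
import numpy as np

def ensemble_margins(votes, y):
    """Crisp ensemble margin of each training instance.

    votes : (n_instances, n_classifiers) integer label predictions
    y     : (n_instances,) true labels
    margin_i = (#votes for true class - max #votes for any other) / T
    """
    n, T = votes.shape
    n_classes = int(max(votes.max(), y.max())) + 1
    margins = np.empty(n)
    for i in range(n):
        counts = np.bincount(votes[i], minlength=n_classes)
        true_count = counts[y[i]]
        counts[y[i]] = -1                 # mask the true class
        margins[i] = (true_count - counts.max()) / T
    return margins

# e.g. keep only instances above the lowest margin decile:
# keep = margins > np.quantile(margins, 0.10)
```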

  10. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    The Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only the minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using the nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, including the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective and that NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study of NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well with the number of objectives.

  11. Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm

    PubMed Central

    Sun, Baoliang; Jiang, Chunlan; Li, Ming

    2016-01-01

    An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). The measurement error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measurement error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain the network target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could trace a maneuvering target effectively under sensor failure and unknown system measurement errors, and that it exhibits great practicability for multi-node target tracking in WSNs. PMID:27809271

  12. Whole genome analysis of CRISPR Cas9 sgRNA off-target homologies via an efficient computational algorithm.

    PubMed

    Zhou, Hong; Zhou, Michael; Li, Daisy; Manthey, Joseph; Lioutikova, Ekaterina; Wang, Hong; Zeng, Xiao

    2017-11-17

    The beauty and power of the genome editing mechanism, the CRISPR Cas9 endonuclease system, lies in the fact that it is RNA-programmable, such that Cas9 can be guided to any genomic locus complementary to a 20-nt RNA, the single guide RNA (sgRNA), to cleave double-stranded DNA, allowing the introduction of wanted mutations. Unfortunately, it has been reported repeatedly that the sgRNA can also guide Cas9 to off-target sites where the DNA sequence is homologous to the sgRNA. Using the human genome and Streptococcus pyogenes Cas9 (SpCas9) as an example, this article mathematically analyzes the probabilities of off-target homologies of sgRNAs and shows that for a large genome such as the human genome, potential off-target homologies are inevitable in sgRNA selection. A highly efficient computational algorithm was developed for whole-genome sgRNA design and off-target homology searches. By means of a dynamically constructed sequence-indexed database and a simplified sequence alignment method, this algorithm achieves very high efficiency while guaranteeing the identification of all existing potential off-target homologies. Via this algorithm, 1,876,775 sgRNAs were designed for the 19,153 human mRNA genes and only two sgRNAs were found to be free of off-target homology. By means of the novel and efficient sgRNA homology search algorithm introduced in this article, genome-wide sgRNA design and off-target analysis were conducted, and the results confirmed the mathematical analysis: for an sgRNA sequence, it is almost impossible to escape potential off-target homologies. Future innovations in CRISPR Cas9 gene editing technology need to focus on how to eliminate Cas9 off-target activity.
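
    The flavor of a sequence-indexed off-target search can be conveyed with a short sketch: an exact-match seed pulls candidate loci from a k-mer index, and each candidate is then verified over the full 20 nt. The seed length, mismatch cap, and omission of the PAM check are simplifications, not the article's algorithm.

```python
from collections import defaultdict

def build_seed_index(genome, k=12):
    """Index every genomic k-mer position (a 'sequence-indexed database')."""
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)
    return index

def off_target_hits(sgrna, genome, index, max_mm=3, k=12):
    """Find sites homologous to a 20-nt sgRNA.

    The exact-match seed (the PAM-proximal 3' end of the guide) pulls
    candidate loci from the index; each candidate is verified by
    counting mismatches over the full 20 nt. A real tool would also
    check the NGG PAM and scan the reverse strand.
    """
    seed, hits = sgrna[-k:], []
    for pos in index.get(seed, []):
        start = pos - (len(sgrna) - k)
        if start < 0:
            continue
        site = genome[start:start + len(sgrna)]
        mm = sum(a != b for a, b in zip(sgrna, site))
        if mm <= max_mm:
            hits.append((start, mm))
    return hits
```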

  13. Improved algorithm of ray tracing in ICF cryogenic targets

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin

    2016-10-01

    High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART) algorithm. The traditional Runge-Kutta methods, which are restricted by the precision of the grid division and the step size of the ray trace, cannot compute accurately where the refractive index jumps. In this paper, we propose an improved ray tracing algorithm based on the Runge-Kutta methods and Snell's law of refraction to achieve high tracing precision. On refractive-index boundaries, we apply Snell's law of refraction and a contact-point search algorithm to ensure the accuracy of the simulation. Inside the cryogenic target, a combination of the Runge-Kutta methods and a self-adaptive step algorithm is employed for the computation. The original refractive index data, which are used to mesh the target, can be obtained by experimental measurement or from an a priori refractive-index distribution function. A finite difference method is used to calculate the refractive index gradient at the mesh nodes, and distance-weighted average interpolation is utilized to obtain the refractive index and its gradient at each point in space. In the simulation, we take an ideal ICF target, a Luneberg lens, and a graded-index rod as simulation models to calculate the spot diagram and wavefront map. Compared with Zemax, the results show that the improved ray tracing algorithm based on the fourth-order Runge-Kutta methods and Snell's law of refraction exhibits high accuracy. The relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
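
    Between boundaries, the gradient-index ray equation d/ds(n dr/ds) = ∇n can be integrated with a fixed-step fourth-order Runge-Kutta scheme, as sketched below; the boundary handling via Snell's law and the adaptive step control described above are omitted, and the fixed step size is illustrative.

```python
import numpy as np

def trace_ray(r0, d0, n_func, grad_n, s_max, ds=1e-3):
    """Integrate the GRIN ray equation d/ds(n dr/ds) = grad(n) with RK4.

    State y = (r, T) with T = n * dr/ds, so that
    dr/ds = T / n(r) and dT/ds = grad_n(r).
    n_func(r) and grad_n(r) come from the meshed index data.
    """
    def deriv(y):
        r, T = y[:3], y[3:]
        return np.concatenate([T / n_func(r), grad_n(r)])

    # Initial optical direction vector T0 = n(r0) * unit(d0).
    y = np.concatenate([r0, n_func(r0) * d0 / np.linalg.norm(d0)])
    for _ in range(int(s_max / ds)):
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * ds * k1)
        k3 = deriv(y + 0.5 * ds * k2)
        k4 = deriv(y + ds * k3)
        y = y + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:3], y[3:]   # final position and optical direction T
```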

  14. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications like mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many popular detection algorithms do not work for difficult targets (small, camouflaged, etc.) and may result in high false alarms. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that library targets are measured under environmental conditions different from those of the application. This study instead uses spectral data of the same targets acquired during collection of the HSI image, and analyzes target spectra in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral ranges for reducing data, and its efficacy for improving target detection is verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection with four popular spectral-matching target detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ANN-suggested ranges, relative to the full spectrum, for detection of the desired targets. In addition, a comparative assessment of the algorithms is also performed using ROC.

  15. FOXP2 targets show evidence of positive selection in European populations.

    PubMed

    Ayub, Qasim; Yngvadottir, Bryndis; Chen, Yuan; Xue, Yali; Hu, Min; Vernes, Sonja C; Fisher, Simon E; Tyler-Smith, Chris

    2013-05-02

    Forkhead box P2 (FOXP2) is a highly conserved transcription factor that has been implicated in human speech and language disorders and plays important roles in the plasticity of the developing brain. The pattern of nucleotide polymorphisms in FOXP2 in modern populations suggests that it has been the target of positive (Darwinian) selection during recent human evolution. In our study, we searched for evidence of selection that might have followed FOXP2 adaptations in modern humans. We examined whether or not putative FOXP2 targets identified by chromatin-immunoprecipitation genomic screening show evidence of positive selection. We developed an algorithm that, for any given gene list, systematically generates matched lists of control genes from the Ensembl database, collates summary statistics for three frequency-spectrum-based neutrality tests from the low-coverage resequencing data of the 1000 Genomes Project, and determines whether these statistics are significantly different between the given gene targets and the set of controls. Overall, there was strong evidence of selection of FOXP2 targets in Europeans, but not in the Han Chinese, Japanese, or Yoruba populations. Significant outliers included several genes linked to cellular movement, reproduction, development, and immune cell trafficking, and 13 of these constituted a significant network associated with cardiac arteriopathy. Strong signals of selection were observed for CNTNAP2 and RBFOX1, key neurally expressed genes that have been consistently identified as direct FOXP2 targets in multiple studies and that have themselves been associated with neurodevelopmental disorders involving language dysfunction. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  16. SamSelect: a sample sequence selection algorithm for quorum planted motif search on large DNA datasets.

    PubMed

    Yu, Qiang; Wei, Dingbang; Huo, Hongwei

    2018-06-18

    Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an unfeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
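
    The quorum condition at the core of qPMS is compact to state in code: an l-mer qualifies if it occurs with at most d mismatches in at least ⌈qt⌉ of the t sequences. The brute-force scan below is for illustration only; practical qPMS solvers avoid this enumeration.

```python
import math

def satisfies_quorum(motif, sequences, d, q):
    """qPMS check: does the candidate l-mer occur with at most d
    mismatches in at least ceil(q * t) of the t sequences?
    (Brute force, for illustration; qPMS solvers avoid this scan.)"""
    l = len(motif)
    hits = sum(
        any(sum(a != b for a, b in zip(motif, seq[i:i + l])) <= d
            for i in range(len(seq) - l + 1))
        for seq in sequences
    )
    return hits >= math.ceil(q * len(sequences))
```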

  17. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, an aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectory has substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  18. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

    The traditional mean shift algorithm for target tracking is effective and offers good real-time performance, but it still has some shortcomings. It easily falls into local optima during tracking, it is less effective when the object moves quickly, and the size of its tracking window never changes, so the method fails when the size of the moving object changes. As a result, we propose a new method: a particle swarm optimization algorithm is used to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. The experimental results indicate that the proposed method can effectively track the object while the size of the tracking window adapts.
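
    A bare-bones mean shift localization step over a back-projection weight map is sketched below with a fixed window, which is exactly the limitation the proposed method addresses by adapting the window via SIFT and an affine transform; the tolerance and iteration cap are illustrative.

```python
import numpy as np

def mean_shift_track(weights, center, half, n_iter=20, eps=0.5):
    """Locate the target by iterating the mean shift vector.

    weights : 2-D back-projection map (likelihood that each pixel
              belongs to the target colour model)
    center  : (row, col) window centre carried over from the last frame
    half    : window half-size, fixed here; the proposed method adapts
              it via SIFT matches and an affine transform.
    """
    cy, cx = center
    for _ in range(n_iter):
        y0, y1 = max(0, int(cy - half)), min(weights.shape[0], int(cy + half + 1))
        x0, x1 = max(0, int(cx - half)), min(weights.shape[1], int(cx + half + 1))
        w = weights[y0:y1, x0:x1]
        if w.sum() == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny, nx = (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()
        shift = np.hypot(ny - cy, nx - cx)
        cy, cx = ny, nx
        if shift < eps:        # converged: shift below tolerance
            break
    return cy, cx
```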

  19. Research of maneuvering target prediction and tracking technology based on IMM algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Zheng; Mao, Yao; Deng, Chao; Liu, Qiong; Chen, Jing

    2016-09-01

    Maneuvering target prediction and tracking technology is widely used in both military and civilian applications, and its study has long been a research hotspot and difficulty. In Electro-Optical acquisition-tracking-pointing (ATP) systems, the primary traditional maneuvering targets are ballistic targets, large aircraft, and other large targets. Those targets feature high velocity and strongly regular trajectories, so Kalman filtering and polynomial fitting work well when used to track them. In recent years, small unmanned aerial vehicles have developed rapidly because they are small, agile, and simple to operate. Although they appear as close-in, slow, and small targets in the observation system of an ATP, they have strong maneuverability. Moreover, because they are manually operated, their acceleration changes greatly and they move erratically. Consequently, prediction and tracking precision is low when traditional algorithms are used to track their maneuvering flight, such as speeding up, turning, and climbing. The interacting multiple model (IMM) algorithm uses multiple models to match the target's real movement trajectory, with interaction between the models. The IMM algorithm can switch models based on a Markov chain to adapt to changes in the target's trajectory, so it is well suited to the prediction and tracking of small unmanned aerial vehicles because of its better adaptability to irregular movement. This paper sets up a model set consisting of a constant velocity model (CV), a constant acceleration model (CA), a constant turning model (CT), and the current statistical model. The results of simulating and analyzing real movement trajectory data of small unmanned aerial vehicles show that prediction and tracking based on the interacting multiple model algorithm achieve relatively lower tracking error and improve tracking precision.
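
    The IMM interaction (mixing) step that blends the model-conditioned estimates for a model set such as CV/CA/CT can be sketched as follows; this is the standard IMM formulation, not the paper's specific implementation, and the model-conditioned Kalman filters themselves are not shown.

```python
import numpy as np

def imm_mix(mu, Pi, x, P):
    """IMM interaction (mixing) step for r models (e.g., CV/CA/CT).

    mu : (r,) model probabilities from the previous cycle
    Pi : (r, r) Markov transition matrix, Pi[i, j] = P(model j | model i)
    x  : (r, n) model-conditioned state estimates
    P  : (r, n, n) model-conditioned covariances
    Returns predicted model probabilities and the mixed initial mean and
    covariance handed to each model's Kalman filter.
    """
    c = Pi.T @ mu                              # predicted model probabilities
    w = Pi * mu[:, None] / c[None, :]          # mixing weights w[i, j]
    x0 = np.einsum("ij,in->jn", w, x)          # mixed means per model j
    P0 = np.zeros_like(P)
    for j in range(len(mu)):
        for i in range(len(mu)):
            dx = x[i] - x0[j]
            P0[j] += w[i, j] * (P[i] + np.outer(dx, dx))
    return c, x0, P0
```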

  20. Selective epidemic vaccination under the performant routing algorithms

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution that overcomes this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeded in eradicating the virus better than a purely random intervention under the performant routing algorithm strategies.

  1. Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection

    PubMed Central

    Offman, Marc N; Tournier, Alexander L; Bates, Paul A

    2008-01-01

    Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets; compared to 25% for a normal GA. PMID:18673557

  2. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS.

    PubMed

    Su, Qiang; Wang, Yina; Jiang, Xiaobing; Chen, Fuxue; Lu, Wen-Cong

    2017-01-01

    To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test, and then uses CFS to select genes from those selected by the K-S test. We adopted support vector machines (SVM) as the classification tool and used classification accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
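
    A compact sketch of the two-stage pipeline, a per-gene K-S filter followed by a greedy forward search on the standard CFS merit, is given below; the significance level and the use of Pearson correlation inside the merit are illustrative choices, not necessarily the authors'.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_filter(X, y, alpha=0.01):
    """Stage 1: keep genes whose expression distribution differs between
    the two classes according to a per-gene two-sample K-S test."""
    return [g for g in range(X.shape[1])
            if ks_2samp(X[y == 0, g], X[y == 1, g]).pvalue < alpha]

def cfs_merit(X, y, subset):
    """Standard CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, g], y)[0, 1]) for g in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def ks_cfs(X, y, alpha=0.01):
    """Stage 2: greedy forward selection maximizing the CFS merit."""
    candidates, subset = ks_filter(X, y, alpha), []
    while candidates:
        best = max(candidates, key=lambda g: cfs_merit(X, y, subset + [g]))
        if subset and cfs_merit(X, y, subset + [best]) <= cfs_merit(X, y, subset):
            break
        subset.append(best)
        candidates.remove(best)
    return subset
```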

  3. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting, but it still has some drawbacks. To improve the performance of random forests, this paper addresses imbalanced data processing, feature selection, and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, namely the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm, and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise; thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC, and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, the hybrid algorithm provides a new way to perform feature selection and parameter optimization.

  4. Evaluation of the novel algorithm of flexible ligand docking with moveable target-protein atoms.

    PubMed

    Sulimov, Alexey V; Zheltkov, Dmitry A; Oferkin, Igor V; Kutov, Danil C; Katkova, Ekaterina V; Tyrtyshnikov, Eugene E; Sulimov, Vladimir B

    2017-01-01

    We present a novel docking algorithm based on the Tensor Train decomposition and TT-Cross global optimization. The algorithm is applied to the docking problem with a flexible ligand and moveable protein atoms. The energy of the protein-ligand complex is calculated within the MMFF94 force field in vacuum. No grid of precalculated energy potentials of probe ligand atoms in the field of the target protein atoms is used; the energy of the protein-ligand complex for any given configuration is computed directly with the MMFF94 force field without any fitting parameters. The conformation space of the system coordinates is formed by translations and rotations of the ligand as a whole, by the ligand torsions, and by the Cartesian coordinates of the selected target protein atoms. The mobility of protein and ligand atoms is taken into account in the docking process simultaneously and equally. The algorithm is realized in the novel parallel docking program SOL-P, and results of its performance on a set of 30 protein-ligand complexes are presented. The docking positioning accuracy is investigated as a function of the parameters of the docking algorithm and the number of moveable protein atoms. It is shown that the mobility of the protein atoms improves docking positioning accuracy. The SOL-P program is able to dock a flexible ligand into the active site of a target protein with several dozen moveable protein atoms: the native crystallized ligand pose is correctly found as the global energy minimum in a search space with 157 dimensions using 4700 CPU hours on the Lomonosov supercomputer.

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity to improve its coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, the complexity budget is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  6. SDSS-III Baryon Oscillation Spectroscopic Survey data release 12: Galaxy target selection and large-scale structure catalogues

    DOE PAGES

    Reid, Beth; Ho, Shirley; Padmanabhan, Nikhil; ...

    2015-11-17

    The Baryon Oscillation Spectroscopic Survey (BOSS), part of the Sloan Digital Sky Survey (SDSS) III project, has provided the largest survey of galaxy redshifts available to date, in terms of both the number of galaxy redshifts measured by a single survey, and the effective cosmological volume covered. Key to analysing the clustering of these data to provide cosmological measurements is understanding the detailed properties of this sample. Potential issues include variations in the target catalogue caused by changes either in the targeting algorithm or properties of the data used, the pattern of spectroscopic observations, the spatial distribution of targets for which redshifts were not obtained, and variations in the target sky density due to observational systematics. We document here the target selection algorithms used to create the galaxy samples that comprise BOSS. We also present the algorithms used to create large-scale structure catalogues for the final Data Release (DR12) samples and the associated random catalogues that quantify the survey mask. The algorithms are an evolution of those used by the BOSS team to construct catalogues from earlier data, and have been designed to accurately quantify the galaxy sample. Furthermore, the code used, designated mksample, is released with this paper.

  7. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of this algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm is used to search for a pair of optimal parameters, as a better alternative to the common practice of simply picking the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, for hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
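
    The parameter search can be illustrated with a small evolutionary loop over (log C, log γ) scored by cross-validated accuracy; scikit-learn's SVC stands in for the SVM, and plain CV accuracy is a simplification of the RFE-based selection criterion.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def ga_svm_params(X, y, pop=12, gens=15, sigma=0.3):
    """Evolve (log10 C, log10 gamma) for a Gaussian-kernel SVM.

    Fitness is plain 5-fold CV accuracy, a stand-in for the RFE-based
    criterion; elitist selection with Gaussian mutation is illustrative.
    """
    population = rng.uniform([-2, -5], [4, 1], size=(pop, 2))

    def fitness(ind):
        clf = SVC(C=10 ** ind[0], gamma=10 ** ind[1])
        return cross_val_score(clf, X, y, cv=5).mean()

    for _ in range(gens):
        scores = np.array([fitness(i) for i in population])
        elite = population[np.argsort(scores)[-(pop // 2):]]   # keep best half
        children = elite + rng.normal(0.0, sigma, elite.shape)  # mutate
        population = np.vstack([elite, children])
    best = max(population, key=fitness)
    return 10 ** best[0], 10 ** best[1]
```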

  8. An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.

    Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility's expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection, but their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real-world data obtained from the University of Southern California's SmartGrid and show that our algorithm achieves near-exact curtailment values with errors in the range of 10⁻⁷ to 10⁻⁵, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic deployed in practice in the USC SmartGrid and show that, for the same set of available customer-strategy pairs, our algorithm performs 10³ to 10⁷ times better in terms of the curtailment errors incurred.
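
    A minimal version of such an ILP can be written with the PuLP modeling library, assuming a hypothetical `curtailments[customer][strategy]` table of predicted curtailment values; the at-most-one-strategy constraint and the absolute-error linearization are standard modeling devices, but this is a sketch, not the authors' formulation.

```python
import pulp

def select_customers(curtailments, target_kw):
    """ILP: choose at most one strategy per customer to minimize the
    curtailment error |achieved - target|.

    `curtailments[c][s]` is the predicted curtailment (kW) of customer c
    under strategy s; customer and strategy keys are assumed to be
    strings safe to embed in variable names.
    """
    prob = pulp.LpProblem("demand_response", pulp.LpMinimize)
    x = {(c, s): pulp.LpVariable(f"x_{c}_{s}", cat="Binary")
         for c in curtailments for s in curtailments[c]}
    err = pulp.LpVariable("abs_error", lowBound=0)
    achieved = pulp.lpSum(curtailments[c][s] * x[c, s] for (c, s) in x)

    prob += err                               # objective
    prob += achieved - target_kw <= err       # linearize |achieved - target|
    prob += target_kw - achieved <= err
    for c in curtailments:                    # at most one strategy each
        prob += pulp.lpSum(x[c, s] for s in curtailments[c]) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(c, s) for (c, s) in x if x[c, s].value() == 1]
```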

  9. Improved prediction of peptide detectability for targeted proteomics using a rank-based algorithm and organism-specific data.

    PubMed

    Qeli, Ermir; Omasits, Ulrich; Goetze, Sandra; Stekhoven, Daniel J; Frey, Juerg E; Basler, Konrad; Wollscheid, Bernd; Brunner, Erich; Ahrens, Christian H

    2014-08-28

    The in silico prediction of the best-observable "proteotypic" peptides in mass spectrometry-based workflows is a challenging problem. Being able to accurately predict such peptides would enable the informed selection of proteotypic peptides for targeted quantification of previously observed and non-observed proteins for any organism, with a significant impact for clinical proteomics and systems biology studies. Current prediction algorithms rely on physicochemical parameters in combination with positive and negative training sets to identify those peptide properties that most profoundly affect their general detectability. Here we present PeptideRank, an approach that uses a learning-to-rank algorithm for peptide detectability prediction from shotgun proteomics data, and that eliminates the need to select a negative dataset for the training step. A large number of different peptide properties are used to train ranking models in order to predict a ranking of the best-observable peptides within a protein. Empirical evaluation with rank accuracy metrics showed that PeptideRank complements existing prediction algorithms. Our results indicate that the best performance is achieved when it is trained on organism-specific shotgun proteomics data, and that PeptideRank is most accurate for short to medium-sized and abundant proteins, without any loss in prediction accuracy for the important class of membrane proteins. Targeted proteomics approaches have been gaining a lot of momentum and hold immense potential for systems biology studies and clinical proteomics. However, since only very few complete proteomes have been reported to date, for a considerable fraction of a proteome there is no experimental proteomics evidence that would guide the selection of the best-suited proteotypic peptides (PTPs), i.e. peptides that are specific to a given proteoform and that are repeatedly observed in a mass spectrometer. We describe a novel, rank-based approach for the prediction of peptide detectability.

  10. Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints

    NASA Astrophysics Data System (ADS)

    Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea

    2018-05-01

    This paper proposes a maneuver algorithm for the agent performing target tracking with bearing angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing angle data. The methods of bearings-only target state estimation are outlined. The nature of bearings-only target tracking problem is then addressed. Based on the insight from above-mentioned properties, the maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear, hysteresis guidance law and the estimation accuracy assessment criteria based on the theory of Cramer-Rao bound. The proposed guidance law generates lateral acceleration command based on current field of view angle. The accuracy criteria supply the expected estimation variance, which acts as a terminal criterion for the proposed algorithm. The aforementioned algorithm is verified with a two-dimensional simulation.

  11. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, we calculate the variation of clustering performance after removing a certain feature. Finally, we evaluate the degree of distinctiveness of each feature vector according to the result; an effective feature vector is one whose degree of distinctiveness exceeds the set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features can be reduced so as to reduce the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
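
    A rough reading of the feature-scoring idea in scikit-learn terms is sketched below: cluster with k-means++, drop one feature at a time, and measure how much the cluster separation degrades. The silhouette criterion and threshold are stand-ins for the paper's clustering-performance measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def feature_distinctiveness(X, n_clusters=2, threshold=0.02, seed=0):
    """Score each traffic feature by its effect on cluster separation.

    Clusters the samples with k-means++ (attack vs. background traffic),
    then re-clusters with each feature removed; the drop in silhouette
    score measures how much that feature helps separate the two kinds
    of traffic. Features whose drop exceeds the (illustrative) threshold
    are kept as effective.
    """
    km = lambda data: KMeans(n_clusters, init="k-means++", n_init=10,
                             random_state=seed).fit_predict(data)
    base = silhouette_score(X, km(X))
    effective = []
    for f in range(X.shape[1]):
        Xr = np.delete(X, f, axis=1)
        if base - silhouette_score(Xr, km(Xr)) > threshold:
            effective.append(f)
    return effective
```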

  12. Aircraft target detection algorithm based on high resolution spaceborne SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing

    2018-03-01

    In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model and a Markov random field (MRF): the Gamma mixture model produces the initial classification result, which is then optimized using the spatial correlation between pixels via the MRF technique. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are identified to reduce false alarms and increase detection performance. Finally, the aircraft target detection results are presented and verified by simulation tests.

  13. Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.

    PubMed

    El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C

    2015-04-01

    This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed through a number of detailed steps using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of risk scores at which the algorithms would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. The numbers of previous emergency admissions and admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index for predicting mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in the risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting risk of in-hospital and post-discharge mortality and risk of hospital readmission 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.

  14. Deferred discrimination algorithm (nibbling) for target filter management

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John; Johnson, John L.

    1999-07-01

    A new method of classifying objects is presented. Rather than trying to form the classifier in one step or in one training algorithm, it is done in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained in series with single one-shot examples but applied in parallel, is implemented with single layer perceptrons, yet maintains its fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.

  15. Selection method of terrain matching area for TERCOM algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper mainly studies the terrain adaptability of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method of terrain matching areas for the TERCOM algorithm and verify the correctness of the method with real terrain data through simulation experiments. Experimental results show that the matching areas obtained by the proposed method have good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
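
    A hedged sketch of matching-area screening: score fixed windows of a digital elevation model by simple terrain characteristic parameters (elevation standard deviation and mean slope here; the paper's exact parameter set and calibrated thresholds may differ).

```python
# Illustrative window screening over a synthetic DEM; thresholds and window
# size are invented, not the paper's values.
import numpy as np

def terrain_params(patch):
    sigma = patch.std()                              # elevation variability
    rough = np.abs(np.diff(patch, axis=0)).mean()    # mean slope magnitude
    return sigma, rough

def select_areas(dem, win=32, sigma_min=10.0, rough_min=1.0):
    """Return top-left corners of windows suitable for TERCOM matching."""
    good = []
    for i in range(0, dem.shape[0] - win, win):
        for j in range(0, dem.shape[1] - win, win):
            sigma, rough = terrain_params(dem[i:i+win, j:j+win])
            if sigma > sigma_min and rough > rough_min:
                good.append((i, j))
    return good

dem = np.random.default_rng(2).normal(0, 25, (256, 256)).cumsum(axis=1)
print(f"{len(select_areas(dem))} candidate matching areas")
```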

  16. Automated Target Planning for FUSE Using the SOVA Algorithm

    NASA Technical Reports Server (NTRS)

    Heatwole, Scott; Lanzi, R. James; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2007-01-01

    The SOVA algorithm was originally developed under the Resilient Systems and Operations Project of the Engineering for Complex Systems Program from NASA's Aerospace Technology Enterprise as a conceptual framework to support real-time autonomous system mission and contingency management. The algorithm and its software implementation were formulated for generic application to autonomous flight vehicle systems, and its efficacy was demonstrated by simulation within the problem domain of Unmanned Aerial Vehicle autonomous flight management. The approach itself is based upon the precept that autonomous decision making for a very complex system can be made tractable by distillation of the system state to a manageable set of strategic objectives (e.g., maintain power margin, maintain mission timeline, et cetera), which, if attended to, will result in a favorable outcome. From any given starting point, the attainability of the end-states resulting from a set of candidate decisions is assessed by propagating a system model forward in time while qualitatively mapping simulated states into margins on strategic objectives using fuzzy inference systems. The expected return value of each candidate decision is evaluated as the product of the assigned value of the end-state with the assessed attainability of the end-state. The candidate decision yielding the highest expected return value is selected for implementation; thus, the approach provides a software framework for intelligent autonomous risk management. The name adopted for the technique incorporates its essential elements: Strategic Objective Valuation and Attainability (SOVA). Maximum value of the approach is realized for systems where human intervention is unavailable in the timeframe within which critical control decisions must be made. The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite, launched in 1999, has been collecting science data for eight years.[1] At its beginning of life, FUSE had six gyros in two
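
    A toy rendering of the SOVA decision rule: propagate each candidate decision through a system model, map end-state margins to attainability in [0, 1] with a fuzzy-style membership function, and select the decision maximizing value times attainability. The dynamics, margins, and values below are invented for illustration.

```python
# Toy SOVA-style selection over a set of candidate decisions.
import numpy as np

def attainability(margin, lo=0.0, hi=1.0):
    """Piecewise-linear membership: 0 below lo, 1 above hi."""
    return float(np.clip((margin - lo) / (hi - lo), 0.0, 1.0))

def expected_return(decision, state, value=1.0):
    # Trivial stand-in model: a decision drains power but advances the timeline.
    power = state["power"] - 0.2 * decision
    timeline = state["timeline"] + 0.5 * decision
    # Overall attainability taken as the weakest objective margin.
    return value * min(attainability(power), attainability(timeline))

state = {"power": 0.9, "timeline": 0.2}
best = max([0, 1, 2, 3], key=lambda d: expected_return(d, state))
print("selected decision:", best)
```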

  17. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is an active research field, providing an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristic, can deal with nonlinear and non-Gaussian problems and is therefore widely used. Various forms of importance density make the particle filter valid when target occlusion occurs or when tracking must recover from failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, which increases the amount of computation. In this paper, the particle filter algorithm and mean shift are combined, aiming at the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against a complex background. From the two perspectives of adaptive multiple-information fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion: after analyzing the infrared characteristics of the target, the algorithm first extracts the target's grayscale and edge features, then guides these two features by the target's motion information to obtain motion-guided grayscale and motion-guided edge features. We then propose a new adaptive fusion mechanism that integrates these two new features adaptively into the mean shift tracking framework. Finally, we design an automatic target-model updating strategy.
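
    Before any fusion with mean shift, the underlying bootstrap particle filter loop (predict, weight, resample) can be sketched on a 1-D toy track; the motion and measurement models below are invented and stand in for the paper's infrared feature likelihoods.

```python
# Minimal bootstrap particle filter for a 1-D target position.
import numpy as np

rng = np.random.default_rng(3)
T, N = 50, 500
truth = np.cumsum(rng.normal(0.5, 0.1, T))          # slowly drifting target
obs = truth + rng.normal(0, 1.0, T)                 # noisy measurements

particles = rng.normal(0, 2.0, N)
estimate = np.empty(T)
for t in range(T):
    particles += rng.normal(0.5, 0.5, N)            # predict (motion model)
    w = np.exp(-0.5 * (obs[t] - particles) ** 2)    # weight by likelihood
    w /= w.sum()
    estimate[t] = np.dot(w, particles)              # posterior mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resample

print(f"mean abs tracking error: {np.abs(estimate - truth).mean():.2f}")
```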

  18. Matrix Multiplication Algorithm Selection with Support Vector Machines

    DTIC Science & Technology

    2015-05-01

    libraries that could intelligently choose the optimal algorithm for a particular set of inputs. Users would be oblivious to the underlying algorithmic...SAT.” J. Artif. Intell. Res. (JAIR), vol. 32, pp. 565–606, 2008. [9] M. G. Lagoudakis and M. L. Littman, “Algorithm selection using reinforcement...Artificial Intelligence, vol. 21, no. 05, pp. 961–976, 2007. [15] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM

  19. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    NASA Technical Reports Server (NTRS)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stages in the case of an aborted lunar landing. It is therefore not difficult to see that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper describes these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile; on the contrary, the suite was designed from the start to be general enough that any type of trajectory profile (i.e., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.) can be flown using the same suite of algorithms. Each of these profiles makes use of maneuver types designed with the dual goals of robustness and performance: they converge quickly under dispersed conditions and perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert-targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these
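
    As a hedged illustration of what an NC (phasing) maneuver computes, the two-body sketch below sizes a phasing orbit that closes a given phase angle in a given number of revolutions; this is textbook orbital mechanics, not Orion's actual targeting code.

```python
# Sketch: size a coplanar phasing (NC-type) burn pair that closes a phase
# angle dtheta after n_revs revolutions, using pure two-body math.
import numpy as np

MU = 3.986004418e14                       # Earth's GM, m^3 s^-2

def phasing_dv(a_target, dtheta, n_revs=1):
    T_tgt = 2 * np.pi * np.sqrt(a_target ** 3 / MU)
    # A shorter-period orbit gains dtheta / n_revs of phase per revolution.
    T_phase = T_tgt * (1 - dtheta / (2 * np.pi * n_revs))
    a_phase = (MU * (T_phase / (2 * np.pi)) ** 2) ** (1 / 3)
    v_circ = np.sqrt(MU / a_target)                       # circular speed
    v_phase = np.sqrt(MU * (2 / a_target - 1 / a_phase))  # vis-viva at burn radius
    return 2 * abs(v_phase - v_circ)      # enter, then exit, the phasing orbit

dv = phasing_dv(a_target=6.778e6, dtheta=np.radians(10), n_revs=2)
print(f"total phasing delta-v: {dv:.1f} m/s")
```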

  20. McTwo: a two-step feature selection algorithm based on maximal information coefficient.

    PubMed

    Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng

    2016-03-23

    High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, the Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features that are associated with phenotypes and independent of each other, while achieving high classification performance with the nearest neighbor algorithm. Based on a comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance as well as a small feature number, so it may represent a complementary feature selection algorithm for high-dimensional biomedical datasets.
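
    The two-step flavor (associate, then wrap a nearest-neighbor classifier) can be sketched as below. Note the hedges: scikit-learn's mutual information estimator stands in for MIC, and the candidate count and redundancy cutoff are invented.

```python
# Sketch: (1) filter features by association with the phenotype while
# rejecting candidates redundant with already-kept features, (2) evaluate
# the survivors with a 1-nearest-neighbor wrapper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=120, n_features=200, n_informative=6,
                           random_state=0)

relevance = mutual_info_classif(X, y, random_state=0)
kept = []
for j in np.argsort(relevance)[::-1][:30]:       # strongest candidates first
    redundant = any(mutual_info_regression(X[:, [k]], X[:, j],
                                           random_state=0)[0] > 0.3
                    for k in kept)
    if not redundant:
        kept.append(j)

acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                      X[:, kept], y, cv=5).mean()
print(f"{len(kept)} features kept, kNN accuracy {acc:.2f}")
```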

  1. SETI target selection.

    PubMed

    Latham, D W; Soderblom, D R

    1995-01-01

    The NASA High Resolution Microwave Survey consists of two complementary elements: a Sky Survey of the entire sky to a moderate level of sensitivity; and a Targeted Search of nearby stars, one at a time, to a much deeper level of sensitivity. In this paper we propose strategies for target selection. We have two goals: to improve the chances of successful detection of signals from technical civilizations that inhabit planets around solar-type stars, and to minimize the chances of missing signals from unexpected sites. For the main Targeted Search survey of approximately 1000 nearby solar-type stars, we argue that the selection criteria should be heavily biased by what we know about the origin and evolution of life here on Earth. We propose that observations of stars with stellar companions orbiting near the habitable zone should be de-emphasized, because such companions would prevent the formation of habitable planets. We also propose that observations of stars younger than about three billion years should be de-emphasized in favor of older stars, because our own technical civilization took longer than three billion years to evolve here on Earth. To provide the information needed for the preparation of specific target lists, we have undertaken an inventory of a large sample of solar-type stars out to a distance of 60 pc, with the goal of characterizing the relevant astrophysical properties of these stars, especially their ages and companionship. To complement the main survey, we propose that a modest sample of the nearest stars should be observed without any selection biases whatsoever. Finally, we argue that efforts to identify stars with planetary systems should be expanded. If found, such systems should receive intensive scrutiny.

  2. Compact cancer biomarkers discovery using a swarm intelligence feature selection algorithm.

    PubMed

    Martinez, Emmanuel; Alvarez, Mario Moises; Trevino, Victor

    2010-08-01

    Biomarker discovery is a typical application of functional genomics. Due to the large number of genes studied simultaneously in microarray data, feature selection is a key step. Swarm intelligence has emerged as a solution for the feature selection problem. However, swarm intelligence settings for feature selection fail to select small feature subsets. We have proposed a swarm intelligence feature selection algorithm based on the initialization and update of only a subset of particles in the swarm. In this study, we tested our algorithm on 11 microarray datasets for brain, leukemia, lung, prostate, and other cancers. We show that the proposed swarm intelligence algorithm successfully increases the classification accuracy and decreases the number of selected features compared to other swarm intelligence methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., by investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  4. Comparison of human observer and algorithmic target detection in nonurban forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.

    2005-07-01

    We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.

  5. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold-automatic-selection hybrid phase unwrapping algorithm that combines the existing QG algorithm and the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and the FF and QG unwrapping algorithms are then used on the respective sub-maps to unwrap the phase. The feasibility of the proposed method is proved by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.

  6. The Time-domain Spectroscopic Survey: Target Selection for Repeat Spectroscopy

    NASA Astrophysics Data System (ADS)

    MacLeod, Chelsea L.; Green, Paul J.; Anderson, Scott F.; Eracleous, Michael; Ruan, John J.; Runnoe, Jessie; Nielsen Brandt, William; Badenes, Carles; Greene, Jenny; Morganson, Eric; Schmidt, Sarah J.; Schwope, Axel; Shen, Yue; Amaro, Rachael; Lebleu, Amy; Filiz Ak, Nurten; Grier, Catherine J.; Hoover, Daniel; McGraw, Sean M.; Dawson, Kyle; Hall, Patrick B.; Hawley, Suzanne L.; Mariappan, Vivek; Myers, Adam D.; Pâris, Isabelle; Schneider, Donald P.; Stassun, Keivan G.; Bershady, Matthew A.; Blanton, Michael R.; Seo, Hee-Jong; Tinker, Jeremy; Fernández-Trincado, J. G.; Chambers, Kenneth; Kaiser, Nick; Kudritzki, R.-P.; Magnier, Eugene; Metcalfe, Nigel; Waters, Chris Z.

    2018-01-01

    As astronomers increasingly exploit the information available in the time domain, spectroscopic variability in particular opens broad new channels of investigation. Here we describe the selection algorithms for all targets intended for repeat spectroscopy in the Time Domain Spectroscopic Survey (TDSS), part of the extended Baryon Oscillation Spectroscopic Survey within the Sloan Digital Sky Survey (SDSS)-IV. Also discussed are the scientific rationale and technical constraints leading to these target selections. The TDSS includes a large “repeat quasar spectroscopy” (RQS) program delivering ∼13,000 repeat spectra of confirmed SDSS quasars, and several smaller “few-epoch spectroscopy” (FES) programs targeting specific classes of quasars as well as stars. The RQS program aims to provide a large and diverse quasar data set for studying variations in quasar spectra on timescales of years, a comparison sample for the FES quasar programs, and an opportunity for discovering rare, serendipitous events. The FES programs cover a wide variety of phenomena in both quasars and stars. Quasar FES programs target broad absorption line quasars, high signal-to-noise ratio normal broad line quasars, quasars with double-peaked or very asymmetric broad emission line profiles, binary supermassive black hole candidates, and the most photometrically variable quasars. Strongly variable stars are also targeted for repeat spectroscopy, encompassing many types of eclipsing binary systems, and classical pulsators like RR Lyrae. Other stellar FES programs allow spectroscopic variability studies of active ultracool dwarf stars, dwarf carbon stars, and white dwarf/M dwarf spectroscopic binaries. We present example TDSS spectra and describe anticipated sample sizes and results.

  7. Clustering analysis of moving target signatures

    NASA Astrophysics Data System (ADS)

    Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto

    2010-04-01

    Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the clustering algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the pixel-labeling procedure in image processing. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
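
    One common formalization of the knee-point idea (an assumption here, not necessarily the authors' exact adaptive variant) picks the k whose point on the SSE-versus-k curve lies farthest from the chord joining the curve's endpoints:

```python
# Knee-point heuristic for the number of clusters on synthetic blob data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
ks = np.arange(1, 11)
sse = np.array([KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in ks])

# Normalize both axes, then measure distance to the first-to-last chord.
u = (ks - ks[0]) / (ks[-1] - ks[0])
v = (sse - sse.min()) / (sse.max() - sse.min())
du, dv = u[-1] - u[0], v[-1] - v[0]
dist = np.abs(du * (v - v[0]) - dv * (u - u[0])) / np.hypot(du, dv)
print("estimated number of clusters:", ks[dist.argmax()])
```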

  8. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms for constructing full haplotype patterns from small available data through computational methods, Tag SNP selection problem, are convenient and attractive. This problem is proved to be an NP-hard problem, so heuristic methods may be useful. In this paper we present a heuristic method based on genetic algorithm to find reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on brute force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.

  9. A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng

    2017-01-01

    Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage discusses the sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratio. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
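
    A toy version of the flow-graph mapping, assuming unit sensor capacities and a probability cutoff ϵ; the detection probabilities, graph shape, and coverage requirement are invented for illustration and are not the paper's exact construction.

```python
# Sketch: source -> sensors (capacity 1) -> targets detected with p >= eps
# -> sink (capacity = required covering sensors per target).
import networkx as nx

eps = 0.6
detect = {            # detection probability of each sensor for each target
    "s1": {"t1": 0.9, "t2": 0.4},
    "s2": {"t1": 0.7, "t2": 0.8},
    "s3": {"t2": 0.95},
}

G = nx.DiGraph()
for s, probs in detect.items():
    G.add_edge("src", s, capacity=1)
    for t, p in probs.items():
        if p >= eps:                       # keep only adequate coverage links
            G.add_edge(s, t, capacity=1)
for t in ("t1", "t2"):
    G.add_edge(t, "sink", capacity=1)      # each target needs one covering sensor

flow_value, flow = nx.maximum_flow(G, "src", "sink")
chosen = [s for s in detect if flow["src"][s] > 0]
print("coverage satisfied:", flow_value == 2, "active sensors:", chosen)
```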

  11. Open-source chemogenomic data-driven algorithms for predicting drug-target interactions.

    PubMed

    Hao, Ming; Bryant, Stephen H; Wang, Yanli

    2018-02-06

    While novel technologies such as high-throughput screening have advanced, together with significant investment by pharmaceutical companies during the past decades, the success rate for drug development has not improved, prompting researchers to look for new strategies of drug discovery. Drug repositioning is a potential approach to solve this dilemma. However, experimental identification and validation of potential drug targets encoded by the human genome is both costly and time-consuming. Therefore, effective computational approaches have been proposed to facilitate drug repositioning, and they have proved successful in drug discovery. Undoubtedly, the availability of open-accessible data from basic chemical biology research and the success of human genome sequencing are crucial to developing effective in silico drug repositioning methods that allow the identification of potential targets for existing drugs. In this work, we review several chemogenomic data-driven computational algorithms with publicly accessible source code for predicting drug-target interactions (DTIs). We organize these algorithms by model properties and model evolutionary relationships. We re-implemented five representative algorithms in the R programming language and compared them by means of mean percentile ranking, a new recall-based evaluation metric in the DTI prediction research field. We anticipate that this review will be objective and helpful to researchers who would like to further improve existing algorithms or need to choose appropriate algorithms to infer potential DTIs in their projects. The source codes for DTI predictions are available at: https://github.com/minghao2016/chemogenomicAlg4DTIpred. Published by Oxford University Press 2018. This work is written by US Government employees and is in the public domain in the US.
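
    For concreteness, a minimal computation of a mean percentile ranking metric might look like the following; the random score matrix and known-target map are invented, and this is one plausible reading of the metric rather than the review's exact definition.

```python
# Mean percentile ranking: known targets should rank near the top of each
# drug's predicted-score list, giving low percentile ranks (lower is better).
import numpy as np

rng = np.random.default_rng(10)
scores = rng.random((5, 100))             # drugs x candidate targets
known = {0: [3, 7], 1: [42], 2: [9, 57], 3: [11], 4: [0, 99]}

ranks = []
for drug, targets in known.items():
    order = np.argsort(-scores[drug])     # best-scored target first
    pos = {t: i for i, t in enumerate(order)}
    ranks += [pos[t] / (scores.shape[1] - 1) for t in targets]

print(f"mean percentile ranking: {np.mean(ranks):.3f}")
```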

  12. FSMRank: feature selection algorithm for learning to rank.

    PubMed

    Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong

    2013-06-01

    In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has proven effective, and these facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T^2). We further develop a generalization bound for the proposed optimization problem using Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gains compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
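
    The accelerated-gradient idea the abstract cites can be sketched with a generic FISTA-style loop, which attains the O(1/T^2) rate; the l1-regularized least-squares objective below is a stand-in for FSMRank's actual ranking loss, and all data are synthetic.

```python
# Nesterov-accelerated proximal gradient (FISTA) on l1-regularized least squares.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(100, 30))
x_true = np.zeros(30); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=100)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = z = np.zeros(30); tk = 1.0
for _ in range(200):
    grad = A.T @ (A @ z - b)
    step = z - grad / L
    x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0)  # soft-threshold
    t_new = (1 + np.sqrt(1 + 4 * tk ** 2)) / 2
    z = x_new + ((tk - 1) / t_new) * (x_new - x)   # momentum extrapolation
    x, tk = x_new, t_new

print("nonzero weights:", np.flatnonzero(np.abs(x) > 1e-3))
```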

  13. Gene selection heuristic algorithm for nutrigenomics studies.

    PubMed

    Valour, D; Hue, I; Grimard, B; Valour, B

    2013-07-15

    Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the discovery of transcriptome-metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast computing in "R." Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. A homogeneous Markov process applied to this stochastic matrix then converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the needs of biologists, who often require small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.

  14. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; therefore, suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose, we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
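
    A minimal sketch of DE-driven kernel parameter selection, with scipy's differential evolution tuning an RBF gamma on a log scale; an SVC and cross-validated accuracy stand in for the KFKT classifier and its discrimination objective.

```python
# Differential evolution searches the kernel parameter maximizing CV accuracy.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def objective(params):
    gamma = 10.0 ** params[0]             # search gamma on a log10 scale
    score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=3).mean()
    return -score                         # DE minimizes, so negate accuracy

result = differential_evolution(objective, bounds=[(-4, 2)], seed=0, maxiter=20)
print(f"best gamma: {10 ** result.x[0]:.4f}, CV accuracy: {-result.fun:.3f}")
```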

  15. Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad

    2016-04-01

    Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray data-sets called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation for all data-sets in terms of the accuracy and achieves a lower number of genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes and competitive results in terms of the classification accuracy.

  16. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method and a complexity-reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.
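
    The data-selective idea can be sketched as an APA update loop that simply skips adaptation whenever the input data matrix is ill-conditioned; the filter sizes, step size, and condition-number threshold below are illustrative, not the paper's improved criterion.

```python
# Affine projection adaptation with a condition-number gate on the data matrix.
import numpy as np

rng = np.random.default_rng(5)
M, P, mu, delta, cond_max = 8, 4, 0.5, 1e-6, 1e3
w_true = rng.normal(size=M)               # unknown system to identify
w = np.zeros(M)
x = rng.normal(size=5000)                 # white input signal

for n in range(M + P, len(x)):
    # Rows are the last P input vectors of length M (projection order P).
    X = np.stack([x[n - k - M:n - k][::-1] for k in range(P)])
    if np.linalg.cond(X @ X.T) > cond_max:
        continue                          # data-selective skip
    e = X @ w_true - X @ w                # error against desired responses
    w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(P), e)

print(f"weight error norm: {np.linalg.norm(w - w_true):.4f}")
```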

  17. Algorithms for selecting informative marker panels for population assignment.

    PubMed

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
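
    A hedged sketch of the greedy flavor of panel construction: repeatedly add the marker that most improves cross-validated assignment accuracy. The genotype simulation and naive Bayes classifier are illustrative stand-ins, not the paper's assignment model.

```python
# Greedy forward selection of an informative marker panel.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(6)
n_pop, n_markers = 4, 40
y = rng.integers(0, n_pop, 400)                     # true source populations
freqs = rng.uniform(0.1, 0.9, (n_pop, n_markers))   # allele frequencies
X = rng.binomial(2, freqs[y]).astype(float)         # 0/1/2 genotype counts

panel, remaining = [], list(range(n_markers))
for _ in range(5):                                  # build a 5-marker panel
    best = max(remaining, key=lambda m: cross_val_score(
        GaussianNB(), X[:, panel + [m]], y, cv=3).mean())
    panel.append(best)
    remaining.remove(best)

print("greedy panel:", panel)
```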

  18. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    NASA Astrophysics Data System (ADS)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features comprised in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection is necessary during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms have yielded interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.

  19. Improving angular resolution with Scan-MUSIC algorithm for real complex targets using 35-GHz millimeter-wave radar

    NASA Astrophysics Data System (ADS)

    Ly, Canh

    2004-08-01

    The Scan-MUSIC (SMUSIC) algorithm, developed by the U.S. Army Research Laboratory (ARL), improves angular resolution for target detection using a single rotatable radar scanning the angular region of interest. The algorithm has been adapted and extended from the MUSIC algorithm used for linear sensor arrays. Previously, it was shown that the SMUSIC algorithm and a millimeter-wave radar can be used to resolve two closely spaced point targets that exhibit constructive interference, but not targets that exhibit destructive interference; the algorithm therefore had some limitations for point targets. In this paper, the SMUSIC algorithm is applied to the problem of resolving real complex scatterer-type targets, which is more useful and of greater practical interest, particularly for future Army radar systems. The paper presents results on the angular resolution of two targets, an M60 tank and an M113 Armored Personnel Carrier (APC), that are within the mainlobe of a Ka-band radar antenna. In particular, we applied the algorithm to resolve the centroids of targets placed within the beamwidth of the antenna. The coherent data collected with the stepped-frequency radar were converted to magnitude for the SMUSIC calculation. Even though the signal returns differed significantly for different orientations and offsets of the two targets, we resolved the two target centroids when they were as close as about 1/3 of the antenna beamwidth.

  20. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    NASA Astrophysics Data System (ADS)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and obtains satisfactory ship recognition performance.

  1. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, given the variety of available features, it is still a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method was presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The classes predicted by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from the list one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. The feature selection and algorithm selection procedures are really helpful for building an ensemble prediction model that can yield better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm, which combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets were used (colon, leukemia, and lung), along with three multi-class microarray datasets (SRBCT, lymphoma, and leukemia). Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight

    NASA Astrophysics Data System (ADS)

    Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong

    2016-10-01

    Similar to the classical meet-in-the-middle algorithm, storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at a target vector of fixed weight, and based on the quantum meet-in-the-middle algorithm, an algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than that of the exhaustive search algorithm and which can reduce the storage complexity of the quantum meet-in-the-middle search algorithm. Then, based on this algorithm and the knapsack vector of the Chor-Rivest public-key cryptosystem of fixed weight d, we present a general quantum meet-in-the-middle search algorithm based on a target solution of fixed weight, whose computational complexity is $\sum_{j=0}^{d}\left(O\left(\sqrt{C_{n-k+1}^{d-j}}\right) + O\left(C_k^j \log C_k^j\right)\right)$ with $\sum_{i=0}^{d} C_k^i$ memory cost, and the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for the knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of the algorithm is lower, and its storage complexity is smaller than that of the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526.

  4. An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation

    PubMed Central

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving targets detection is proposed, which is achieved based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we will then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035

  5. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

    Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, the entropy optimization model, the chance-constrained programming model, and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network, and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large-size problems.

  6. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.

  7. Toward Optimal Target Placement for Neural Prosthetic Devices

    PubMed Central

    Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.

    2008-01-01

    Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845

  8. Implementation of a sensor guided flight algorithm for target tracking by small UAS

    NASA Astrophysics Data System (ADS)

    Collins, Gaemus E.; Stankevitz, Chris; Liese, Jeffrey

    2011-06-01

    Small fixed-wing UAS (SUAS) such as Raven and Unicorn have limited power, speed, and maneuverability. Their missions can be dramatically hindered by environmental conditions (wind, terrain), obstructions (buildings, trees) blocking clear line of sight to a target, and/or sensor hardware limitations (fixed stare, limited gimbal motion, lack of zoom). Toyon's Sensor Guided Flight (SGF) algorithm was designed to account for SUAS hardware shortcomings and enable long-term tracking of maneuvering targets by maintaining persistent eyes-on-target. SGF was successfully tested in simulation with high-fidelity UAS, sensor, and environment models, but real-world flight testing with 60 Unicorn UAS revealed surprising second-order challenges that were not highlighted by the simulations. This paper describes the SGF algorithm, our first round of simulation results, our second-order discoveries from flight testing, and subsequent improvements that were made to the algorithm.

  9. Nuclease Target Site Selection for Maximizing On-target Activity and Minimizing Off-target Effects in Genome Editing

    PubMed Central

    Lee, Ciaran M; Cradick, Thomas J; Fine, Eli J; Bao, Gang

    2016-01-01

    The rapid advancement in targeted genome editing using engineered nucleases such as ZFNs, TALENs, and CRISPR/Cas9 systems has resulted in a suite of powerful methods that allows researchers to target any genomic locus of interest. A complementary set of design tools has been developed to aid researchers with nuclease design, target site selection, and experimental validation. Here, we review the various tools available for target selection in designing engineered nucleases, and for quantifying nuclease activity and specificity, including web-based search tools and experimental methods. We also elucidate challenges in target selection, especially in predicting off-target effects, and discuss future directions in precision genome editing and its applications. PMID:26750397

  10. A robust close-range photogrammetric target extraction algorithm for size and type variant targets

    NASA Astrophysics Data System (ADS)

    Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert

    2016-05-01

    The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
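
    A sketch of the bimodal-partitioning idea: choose the valley threshold by maximizing between-class variance (Otsu's criterion, used here as a stand-in for the paper's partitioning rule), label the bright regions, and report region centroids as acquisition points. The frame and target patch are synthetic.

```python
# Bimodal threshold -> region labeling -> centroid extraction.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
frame = rng.normal(60, 5, (128, 128))
frame[30:50, 30:50] = rng.normal(200, 5, (20, 20))   # bright synthetic target

# Otsu's threshold: maximize between-class variance over candidate cuts.
hist, edges = np.histogram(frame, bins=64)
p = hist / hist.sum()
centers = (edges[:-1] + edges[1:]) / 2
w0 = np.cumsum(p); w1 = 1 - w0
m0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
m1 = ((p * centers).sum() - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
t = centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

labels, n = ndimage.label(frame > t)                 # regions of interest
print("acquisition points:",
      ndimage.center_of_mass(frame, labels, range(1, n + 1)))
```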

  11. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
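
    A minimal sketch of the sample-reduction effect: train only on each class's convex hull vertices. Note the hedges: scipy's exact hull replaces the paper's fast approximate selection, and the 2-D Gaussian data and linear SVM are illustrative.

```python
# Keep only convex hull vertices of each class as training samples.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X0 = rng.normal(0, 1, (500, 2))
X1 = rng.normal(3, 1, (500, 2))

keep0 = X0[ConvexHull(X0).vertices]       # hull vertices of class 0
keep1 = X1[ConvexHull(X1).vertices]       # hull vertices of class 1
Xr = np.vstack([keep0, keep1])
yr = np.r_[np.zeros(len(keep0)), np.ones(len(keep1))]

clf = SVC(kernel="linear").fit(Xr, yr)
acc = clf.score(np.vstack([X0, X1]), np.r_[np.zeros(500), np.ones(500)])
print(f"trained on {len(Xr)} of 1000 samples, accuracy {acc:.3f}")
```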

  12. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper proposes a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process statistically proved that the ACO-KNN algorithm performs significantly better than the baseline algorithms. In addition, the experimental results showed that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that can represent the actual data in customer review datasets.

  13. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting normal flight behavior patterns to probabilistically generate a set of fault detectors that can detect any abnormalities (including faults and damage) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
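
    As a rough illustration of the real-valued negative selection idea (not the NASA implementation, which generates detectors probabilistically from flight data), the sketch below assumes sensor data normalized to the unit square and uses fixed-radius detectors; all parameter values are hypothetical.

        import numpy as np

        def train_detectors(self_samples, n_detectors=500, self_radius=0.1, seed=0):
            """Generate random detectors in [0, 1]^d that do not cover normal data."""
            rng = np.random.default_rng(seed)
            d = self_samples.shape[1]
            detectors = []
            while len(detectors) < n_detectors:
                candidate = rng.random(d)
                if np.linalg.norm(self_samples - candidate, axis=1).min() > self_radius:
                    detectors.append(candidate)      # far enough from all 'self' points
            return np.asarray(detectors)

        def is_fault(sample, detectors, radius=0.1):
            """A sample is flagged as abnormal if any detector covers it."""
            return bool(np.linalg.norm(detectors - sample, axis=1).min() < radius)

        # 'Normal' angular-rate data clustered near the center of the unit square.
        normal = 0.5 + 0.05 * np.random.default_rng(1).standard_normal((200, 2))
        det = train_detectors(normal)
        print(is_fault(np.array([0.5, 0.5]), det), is_fault(np.array([0.9, 0.1]), det))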

  14. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking with the global best features was used to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer Dataset (WDBC) was used for estimating the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).

  15. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  16. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  17. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    PubMed

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes, made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target, directly from CBCT images. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which are dependent on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. A motion algorithm has been developed to extract three parameters, the length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and move in a nearly regular motion pattern.

  18. A robust algorithm for automated target recognition using precomputed radar cross sections

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2004-09-01

    Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
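
    The paper derives the relative entropy between two Rician distributions analytically; as a hedged cross-check of that quantity, the sketch below simply evaluates the definition by numerical integration, using SciPy's Rice distribution (shape parameter b is the ratio of the noncentrality to the scale) and an assumed common scale.

        import numpy as np
        from scipy.integrate import quad
        from scipy.stats import rice

        def kl_rician(b_p, b_q, scale=1.0):
            """D(p || q) between two Rician densities, by numerical integration."""
            p, q = rice(b_p, scale=scale), rice(b_q, scale=scale)
            integrand = lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x))
            value, _ = quad(integrand, 0.0, np.inf)
            return value

        # A larger gap between the shape parameters of the two target hypotheses
        # yields a larger divergence, i.e. a smaller bound on the Type II error
        # via Stein's Lemma.
        print(kl_rician(1.0, 1.5), kl_rician(1.0, 3.0))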

  19. TargetSpy: a supervised machine learning approach for microRNA target prediction.

    PubMed

    Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij

    2010-05-28

    Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs, our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and Drosophila.

  20. TargetSpy: a supervised machine learning approach for microRNA target prediction

    PubMed Central

    2010-01-01

    Background Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. Results We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs, our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Conclusion Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well

  1. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387

  2. [MicroRNA Target Prediction Based on Support Vector Machine Ensemble Classification Algorithm of Under-sampling Technique].

    PubMed

    Chen, Zhiru; Hong, Wenxue

    2016-02-01

    Considering the low prediction accuracy on positive samples and the poor overall classification performance caused by the unbalanced sample data of microRNA (miRNA) targets, we propose a support vector machine (SVM)-integration of under-sampling and weight (IUSM) algorithm in this paper, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, miRNA target prediction by the integrated classifier is achieved by combining multiple weak classifiers through a voting mechanism. The experiments revealed that SVM-IUSM, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy on positive targets and the overall classification performance, but also enhance the generalization ability of the miRNA target classifier.

  3. On the Impact of Localization and Density Control Algorithms in Target Tracking Applications for Wireless Sensor Networks

    PubMed Central

    Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.

    2012-01-01

    Target tracking is an important application of wireless sensor networks. The networks' ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments. Therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, among the three evaluated density control algorithms, OGDC is the best option. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329

  4. Airborne target tracking algorithm against oppressive decoys in infrared imagery

    NASA Astrophysics Data System (ADS)

    Sun, Xiechang; Zhang, Tianxu

    2009-10-01

    This paper presents an approach for tracking an airborne target against oppressive infrared decoys. An oppressive decoy lures an infrared-guided missile with its high infrared radiation. Traditional tracking algorithms suffer degraded stability, or even tracking failure, when an airborne target continuously dispenses decoys. The proposed approach first determines an adaptive tracking window. The center of the tracking window is set at a predicted target position computed from a uniform motion model. Different strategies are applied to determine the tracking window size according to the target state. The image within the tracking window is segmented and multiple features of candidate targets are extracted. The most similar candidate target is associated with the tracked target by using a decision function, which calculates a weighted sum of normalized feature differences between two comparable targets. The integrated intensity ratio of the associated target and the tracked target, and the target centroid, are examined to estimate the target state in the presence of decoys. The tracking ability and robustness of the proposed approach have been validated by processing available real-world and simulated infrared image sequences containing airborne targets and oppressive decoys.
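
    The decision function is described only as a weighted sum of normalized feature differences; the sketch below implements exactly that description, with the specific features, spans, and weights being hypothetical stand-ins.

        import numpy as np

        def association_score(candidate, track, weights, spans):
            """Weighted sum of normalized feature differences (lower is more similar)."""
            diffs = np.abs(np.asarray(candidate) - np.asarray(track)) / np.asarray(spans)
            return float(np.dot(weights, diffs))

        # Hypothetical features: mean intensity, area, aspect ratio.
        track = [120.0, 35.0, 0.80]               # features of the tracked target
        spans = [255.0, 200.0, 1.00]              # normalization range per feature
        weights = [0.5, 0.3, 0.2]                 # relative feature importance
        candidates = [[118.0, 40.0, 0.75],        # plausibly the target
                      [240.0, 10.0, 0.40]]        # plausibly a hot decoy
        best = min(candidates, key=lambda c: association_score(c, track, weights, spans))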

  5. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select behaviors of fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weight is presented, which can dynamically adjust according to the diversity of fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. The experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
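
    The abstract does not define the log-linear model precisely; assuming the standard form, in which each behavior's selection probability is proportional to the exponential of a weighted feature score (a softmax), a minimal behavior selection sketch looks as follows. The behavior and feature names are illustrative only.

        import numpy as np

        def select_behavior(behavior_features, weights, rng):
            """Log-linear (softmax) choice among candidate behaviors for one fish."""
            scores = behavior_features @ weights   # one log-linear score per behavior
            probs = np.exp(scores - scores.max())  # numerically stable softmax
            probs /= probs.sum()
            return rng.choice(len(probs), p=probs), probs

        # Three behaviors (prey, swarm, follow) described by two features each,
        # e.g. expected fitness improvement and local crowding.
        rng = np.random.default_rng(0)
        feats = np.array([[0.9, 0.2],
                          [0.4, 0.7],
                          [0.6, 0.5]])
        choice, probs = select_behavior(feats, np.array([1.5, 0.8]), rng)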

  6. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select behaviors of fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weight is presented, which can dynamically adjust according to the diversity of fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve global optimization capability. The experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895

  7. Assessing the performance of a covert automatic target recognition algorithm

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2005-05-01

    Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
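
    The paper derives a closed-form approximation for the Chernoff information; as a hedged numerical sketch of the quantity itself, the code below evaluates the definition directly for two SciPy Rice distributions with an assumed common scale, optimizing over the exponent s in (0, 1).

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar
        from scipy.stats import rice

        def chernoff_information(b_p, b_q, scale=1.0):
            """C(p, q) = -min_s log Int p(x)^s q(x)^(1-s) dx, evaluated numerically."""
            p, q = rice(b_p, scale=scale), rice(b_q, scale=scale)

            def log_integral(s):
                f = lambda x: np.exp(s * p.logpdf(x) + (1.0 - s) * q.logpdf(x))
                value, _ = quad(f, 0.0, np.inf)
                return np.log(value)

            res = minimize_scalar(log_integral, bounds=(1e-6, 1 - 1e-6), method="bounded")
            return -res.fun

        # The probability of error in the binary test decays roughly like exp(-n * C),
        # so C links tracking time (number of measurements n) to classification error.
        print(chernoff_information(1.0, 2.0))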

  8. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR

    PubMed Central

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and so on. In this work, two novel hybrid meta-heuristic algorithms, i.e., Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of the genetic algorithm and learning automata sequentially, and the MGALA algorithm uses their advantages simultaneously. We applied our proposed algorithms to select the minimum possible number of features from three different datasets and observed that the MGALA and SGALA algorithms had the best outcome individually and on average compared to other feature selection algorithms. Through comparison of our proposed algorithms, we deduced that the rate of convergence to the optimal result of the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO and LA algorithms. In the end, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were applied as the input of an LS-SVR model, and the results from the LS-SVR models showed that the LS-SVR model had more predictive ability with input from the SGALA and MGALA algorithms than with input from all other mentioned algorithms. Therefore, the results corroborate that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to all other mentioned algorithms. PMID:28979308

  9. Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR.

    PubMed

    MotieGhader, Habib; Gharaghani, Sajjad; Masoudi-Sobhanzadeh, Yosef; Masoudi-Nejad, Ali

    2017-01-01

    Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and so on. In this work, two novel hybrid meta-heuristic algorithms, i.e., Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata, are proposed for QSAR feature selection. The SGALA algorithm uses the advantages of the genetic algorithm and learning automata sequentially, and the MGALA algorithm uses their advantages simultaneously. We applied our proposed algorithms to select the minimum possible number of features from three different datasets and observed that the MGALA and SGALA algorithms had the best outcome individually and on average compared to other feature selection algorithms. Through comparison of our proposed algorithms, we deduced that the rate of convergence to the optimal result of the MGALA and SGALA algorithms was better than that of the GA, ACO, PSO and LA algorithms. In the end, the results of the GA, ACO, PSO, LA, SGALA, and MGALA algorithms were applied as the input of an LS-SVR model, and the results from the LS-SVR models showed that the LS-SVR model had more predictive ability with input from the SGALA and MGALA algorithms than with input from all other mentioned algorithms. Therefore, the results corroborate that not only is the predictive efficiency of the proposed algorithms better, but their rate of convergence is also superior to all other mentioned algorithms.

  10. MESSI: metabolic engineering target selection and best strain identification tool.

    PubMed

    Kang, Kang; Li, Jun; Lim, Boon Leong; Panagiotou, Gianni

    2015-01-01

    Metabolic engineering and synthetic biology are synergistically related fields for manipulating target pathways and designing microorganisms that can act as chemical factories. Saccharomyces cerevisiae's ideal bioprocessing traits make yeast a very attractive chemical factory for production of fuels, pharmaceuticals, nutraceuticals as well as a wide range of chemicals. However, future attempts at engineering S. cerevisiae's metabolism using synthetic biology need to move towards more integrative models that incorporate the high connectivity of metabolic pathways and regulatory processes and the interactions in genetic elements across those pathways and processes. To contribute in this direction, we have developed the Metabolic Engineering target Selection and best Strain Identification tool (MESSI), a web server for predicting efficient chassis and regulatory components for yeast bio-based production. The server provides an integrative platform for users to analyse ready-to-use public high-throughput metabolomic data, which are transformed to metabolic pathway activities for identifying the most efficient S. cerevisiae strain for the production of a compound of interest. As input, MESSI accepts metabolite KEGG IDs or pathway names. MESSI outputs a ranked list of S. cerevisiae strains based on aggregation algorithms. Furthermore, through a genome-wide association study of the metabolic pathway activities with the strains' natural variation, MESSI prioritizes genes and small variants as potential regulatory points and promising metabolic engineering targets. Users can choose various parameters in the whole process such as (i) weight and expectation of each metabolic pathway activity in the final ranking of the strains, (ii) Weighted AddScore Fuse or Weighted Borda Fuse aggregation algorithm, (iii) type of variants to be included, (iv) variant sets in different biological levels. Database URL: http://sbb.hku.hk/MESSI/. © The Author(s) 2015.
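
    The record names a Weighted Borda Fuse aggregation option without detailing it; assuming the standard weighted Borda count (each ranking awards points by position, scaled by a user-chosen weight), a minimal sketch of such rank aggregation is shown below. The strain names and weights are hypothetical.

        from collections import defaultdict

        def weighted_borda_fuse(rankings, weights):
            """Fuse several ranked strain lists into one consensus ranking."""
            scores = defaultdict(float)
            for ranking, w in zip(rankings, weights):
                n = len(ranking)
                for position, strain in enumerate(ranking):
                    scores[strain] += w * (n - position)   # weighted Borda points
            return sorted(scores, key=scores.get, reverse=True)

        # Two pathway-activity rankings of three hypothetical strains, with
        # user-chosen pathway weights.
        consensus = weighted_borda_fuse(
            [["strain_A", "strain_B", "strain_C"],
             ["strain_B", "strain_A", "strain_C"]],
            weights=[0.7, 0.3])
        print(consensus)   # strain_A edges out strain_B under these weights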

  11. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches, and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792

  12. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
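
    The record does not detail the authors' PSS procedure; the sketch below shows one standard ingredient often used for parameter ordering, a local sensitivity matrix ranked by column-pivoted QR, applied to a fixed-effects toy model rather than the mixed-effects machinery of the paper. The model and step size are assumptions for illustration.

        import numpy as np
        from scipy.linalg import qr

        def rank_parameters(model, theta, t, h=1e-6):
            """Order parameters by significance via a pivoted QR of the sensitivities."""
            y0 = model(theta, t)
            S = np.empty((y0.size, theta.size))
            for j in range(theta.size):                # forward-difference sensitivities
                tp = theta.copy()
                tp[j] += h * max(1.0, abs(theta[j]))
                S[:, j] = (model(tp, t) - y0) / (tp[j] - theta[j])
            _, _, pivots = qr(S, pivoting=True)        # pivot order ~ significance order
            return pivots

        # Toy example: y = a * exp(-b * t) + c.
        model = lambda th, t: th[0] * np.exp(-th[1] * t) + th[2]
        order = rank_parameters(model, np.array([1.0, 0.5, 0.01]),
                                np.linspace(0.0, 5.0, 50))
        print(order)   # most significant parameter index first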

  13. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    PubMed

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published values except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved.
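
    The triangulation logic described above (pair a high-sensitivity/high-PPV call with a high-specificity/high-PPV call, and consult the medical record only when they disagree) reduces to a few lines; the function names below are hypothetical stand-ins.

        def triangulate(high_sens_call, high_spec_call, review_medical_record):
            """Resolve two algorithm calls, reviewing the chart only on disagreement."""
            if high_sens_call == high_spec_call:   # concordant: accept without review
                return high_sens_call
            return review_medical_record()         # discordant: adjudicate by MR data

        # Only the discordant minority of cases (about 10.6% in this study) incurs
        # the cost of medical record review.
        calls = [(True, True), (True, False), (False, False)]
        resolved = [triangulate(a, b, review_medical_record=lambda: True)
                    for a, b in calls]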

  14. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  15. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  16. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

    Diagnosis of Parkinson's disease (PD) based on speech data has proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Previous research by the authors showed that instance selection can lead to improvement in classification accuracy. However, no attention has been paid to the relationship between speech samples and features until now. Therefore, a new diagnosis algorithm for PD is proposed in this paper that simultaneously selects speech samples and features based on a relevant feature weighting algorithm and a multiple kernel method, so as to find their synergy effects, thereby improving classification accuracy. Experimental results showed that the proposed algorithm obtained an apparent improvement in classification accuracy. It achieved a mean classification accuracy of 82.5%, which was 30.5% higher than the relevant algorithm. Besides, the proposed algorithm detected the synergy effects of speech samples and features, which is valuable for speech marker extraction.

  17. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces as given by physical systems such as spin glasses, but also in combinatorial optimization. This paper considers the problem of selecting individuals in the current population of a genetic algorithm for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means uniform probabilities have to be assigned to a group of the individuals with lowest energy in the population, but zero probability to individuals corresponding to energy values higher than a fixed cutoff, which is equal to a certain rank in the vector sorted by the energy of the states in the current population. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
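
    Threshold selecting as described (uniform probability over the best-ranked individuals, zero beyond the cutoff rank) is straightforward to sketch; the toy 1-D Ising energy below is an assumption chosen purely to make the example self-contained.

        import numpy as np

        def threshold_select(population, energies, cutoff_rank, n_pairs, rng):
            """Draw crossover parents uniformly from the best cutoff_rank individuals.

            This is the rectangular distribution of the paper: equal probability
            for individuals ranked above the cutoff, zero probability below it.
            """
            elite = np.argsort(energies)[:cutoff_rank]   # lowest-energy individuals
            i = rng.choice(elite, size=n_pairs)
            j = rng.choice(elite, size=n_pairs)
            return population[i], population[j]

        # Toy spin-glass-like setting: 20 random spin chains, 1-D Ising energy.
        rng = np.random.default_rng(1)
        pop = rng.choice([-1, 1], size=(20, 30))
        energies = -np.sum(pop[:, :-1] * pop[:, 1:], axis=1)
        parents_a, parents_b = threshold_select(pop, energies, cutoff_rank=5,
                                                n_pairs=8, rng=rng)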

  18. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    Planes are an important target category in remote sensing, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, and we can obtain more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and it has provided great performance improvements in the field of target detection and recognition in everyday scenes. We applied this technology to remote sensing target detection and propose an end-to-end deep network algorithm, which can learn from remote sensing images to detect the targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of the plane target and has better target detection performance than older methods.

  19. A bio-inspired swarm robot coordination algorithm for multiple target searching

    NASA Astrophysics Data System (ADS)

    Meng, Yan; Gan, Jing; Desai, Sachi

    2008-04-01

    The coordination of a multi-robot system searching for multiple targets is challenging in a dynamic environment, since the multi-robot system demands group coherence (agents need to have the incentive to work together faithfully) and group competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of the robot movement during coordination, which may lead to more power consumption and longer searching time. To address these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers the travel cost and target weight, but also predicts the target/robot ratio and potential robot redundancy with respect to the detected targets. Furthermore, a dynamic weight adjustment is also applied to improve the searching performance. This new method is a truly distributed method where each robot makes its own decision based on its local sensing information and the information from its neighbors. Basically, each robot only communicates with its neighbors through a virtual stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm. The proposed ALIVS algorithm has been implemented on the embodied robot simulator, Player/Stage, in a target searching scenario. The simulation results demonstrate its efficiency and robustness in a power-efficient manner under real-world constraints.

  20. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data

    PubMed Central

    Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J.

    2016-01-01

    Abstract Background: The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. Methods: We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women’s Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms—one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV—using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this “triangulation.” Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. Results: The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published values except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Conclusions: Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. PMID:26582243

  1. Optimized hyperspectral band selection using hybrid genetic algorithm and gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2015-12-01

    The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information from HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are utilized to measure the discriminative capability of the band subset. Finally, the band subset with the smallest number of spectral bands that covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results revealed the superiority of the proposed method and indicated that it can considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.

  2. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its

  3. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation. PMID:25493625

  4. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems.

    PubMed

    de Paula, Lauro C M; Soares, Anderson S; de Lima, Telma W; Delbem, Alexandre C B; Coelho, Clarimar J; Filho, Arlindo R G

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR performed in a GPU can be five times faster than its sequential implementation.

  5. Target and Tissue Selectivity Prediction by Integrated Mechanistic Pharmacokinetic-Target Binding and Quantitative Structure Activity Modeling.

    PubMed

    Vlot, Anna H C; de Witte, Wilhelmus E A; Danhof, Meindert; van der Graaf, Piet H; van Westen, Gerard J P; de Lange, Elizabeth C M

    2017-12-04

    Selectivity is an important attribute of effective and safe drugs, and prediction of in vivo target and tissue selectivity would likely improve drug development success rates. However, a lack of understanding of the underlying (pharmacological) mechanisms and availability of directly applicable predictive methods complicates the prediction of selectivity. We explore the value of combining physiologically based pharmacokinetic (PBPK) modeling with quantitative structure-activity relationship (QSAR) modeling to predict the influence of the target dissociation constant (KD) and the target dissociation rate constant on target and tissue selectivity. The KD values of CB1 ligands in the ChEMBL database are predicted by QSAR random forest (RF) modeling for the CB1 receptor and known off-targets (TRPV1, mGlu5, 5-HT1a). Of these CB1 ligands, rimonabant, CP-55940, and Δ8-tetrahydrocannabinol, one of the active ingredients of cannabis, were selected for simulations of target occupancy for CB1, TRPV1, mGlu5, and 5-HT1a in three brain regions, to illustrate the principles of the combined PBPK-QSAR modeling. Our combined PBPK and target binding modeling demonstrated that the optimal values of the KD and koff for target and tissue selectivity were dependent on target concentration and tissue distribution kinetics. Interestingly, if the target concentration is high and the perfusion of the target site is low, the optimal KD value is often not the lowest KD value, suggesting that optimization towards high drug-target affinity can decrease the benefit-risk ratio. The presented integrative structure-pharmacokinetic-pharmacodynamic modeling provides an improved understanding of tissue and target selectivity.

  6. A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.

    PubMed

    Baur, Brittany; Bozdag, Serdar

    2016-01-01

    DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding the mechanism of action of DNA methylation is important for downstream analysis. In the Illumina Infinium HumanMethylation 450K array, there are tens of probes associated with each gene. Given methylation intensities of all these probes, it is necessary to compute which of these probes are most representative of the gene centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene centric DNA methylation using probe level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms and ReliefF. We evaluated all methods based on the predictive power of selected probes on their mRNA expression levels and found that a K-Nearest Neighbors classification using the sequential forward selection algorithm performed better than other algorithms based on all metrics. We also observed that transcriptional activities of certain genes were more sensitive to DNA methylation changes than transcriptional activities of other genes. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
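
    The winning combination reported above, sequential forward selection wrapped around a K-Nearest Neighbors classifier, can be reproduced in spirit with scikit-learn; the sketch below uses synthetic stand-in data (one informative probe planted deliberately) rather than the authors' 450K pipeline.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.neighbors import KNeighborsClassifier

        # Synthetic stand-in: probe-level methylation for one gene (samples x probes)
        # and a discretized expression label; probe 3 is made informative on purpose.
        rng = np.random.default_rng(0)
        X = rng.random((120, 15))
        y = (X[:, 3] + 0.2 * rng.random(120) > 0.7).astype(int)

        knn = KNeighborsClassifier(n_neighbors=5)
        sfs = SequentialFeatureSelector(knn, n_features_to_select=3,
                                        direction="forward", cv=5)
        sfs.fit(X, y)
        representative_probes = np.flatnonzero(sfs.get_support())
        print(representative_probes)   # indices of gene-representative probes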

  7. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    PubMed

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high-performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for cancer tissues to be expertly identified and classified in a rapid and timely manner, both to ensure fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of their combination. It was found in Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset as a feature selection method (29 genes selected), and in Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found in Phase Three that the combined use of PSO and SVM surpassed the other combinations in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.
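
    A compact sketch of the Phase Three idea: a binary PSO searches over gene masks, scoring each mask by cross-validated SVM accuracy. The swarm size, inertia (0.7), acceleration coefficients (1.5) and synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=50,
                           n_informative=8, random_state=0)

def fitness(bits):
    # Cross-validated SVM accuracy on the selected feature subset.
    mask = bits.astype(bool)
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask], y, cv=3).mean()

n_particles, n_feat, iters = 20, X.shape[1], 30
vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_feat))
    # Velocity pulls particles toward personal and global bests; a sigmoid
    # turns each velocity component into the probability of selecting a gene.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better] = pos[better]
    pbest_fit[better] = fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("genes selected:", int(gbest.sum()), "CV accuracy:", round(pbest_fit.max(), 3))
```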

  8. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false-alarm-aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results, in terms of signal-to-clutter ratio and background suppression factor on real and simulated images, prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
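
    One plausible reading of the multi-scale AAGD component (the window sizes and the background-window ratio below are assumptions, not the authors' exact formulation): compare the local mean over a tight inner window with the mean over a wider background window at several scales, then fuse the per-scale responses pixel-wise.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_aagd(img, scales=(3, 5, 7, 9)):
    """Multi-scale average absolute gray difference response map.
    Small bright targets yield a large difference between the mean of a
    tight inner window and the mean of a wider background window."""
    img = img.astype(float)
    responses = []
    for s in scales:
        inner = uniform_filter(img, size=s)
        outer = uniform_filter(img, size=3 * s)   # local background estimate
        responses.append(np.abs(inner - outer))
    return np.max(responses, axis=0)              # fuse the four scales

# Toy frame: flat background, detector noise, one 2x2 "target".
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 2.0, (64, 64))
frame[30:32, 40:42] += 40.0
resp = multiscale_aagd(frame)
print("peak response at:", np.unravel_index(resp.argmax(), resp.shape))
```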

  9. Automatic Target Recognition: Statistical Feature Selection of Non-Gaussian Distributed Target Classes

    DTIC Science & Technology

    2011-06-01

    implementing, and evaluating many feature selection algorithms. Mucciardi and Gose compared seven different techniques for choosing subsets of pattern features [1]. ([1] A. Mucciardi and E. Gose, "A comparison of seven techniques for…")

  10. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.

  11. An improved multi-domain convolution tracking algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, pre-trained on the VOT video set with a multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the predicted target in the previous frame, and the candidate with the highest score is taken as the predicted target of the current frame. A bounding box regression model is introduced to bring the predicted target closer to the ground-truth target box of the test set. A grouping-update strategy is involved to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both target and environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically by a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; success-rate and precision plots are drawn, which illustrate the outstanding performance of the proposed tracking algorithm.

  12. Comparison of different classification algorithms for underwater target discrimination.

    PubMed

    Li, Donghui; Azimi-Sadjadi, Mahmood R; Robinson, Marc

    2004-01-01

    Classification of underwater targets from acoustic backscattered signals is considered here. Several different classification algorithms are tested and benchmarked, not only for their performance but also to gain insight into the properties of the feature space. Results on a wideband 80-kHz acoustic backscattered data set collected for six different objects are presented in terms of the receiver operating characteristic (ROC) and the robustness of the classifiers with respect to reverberation.

  13. Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation

    NASA Astrophysics Data System (ADS)

    Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.

    2018-04-01

    Aiming at the problem that targets may have different orientations in unmanned aerial vehicle (UAV) images, target detection algorithms based on rotation-invariant features are studied, and this paper proposes a RIFF (Rotation-Invariant Fast Features) method accelerated by look-up tables and polar coordinates for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of RIFF, while its operational efficiency is greatly improved.

  14. In-silico Leishmania target selectivity of antiparasitic terpenoids.

    PubMed

    Ogungbe, Ifedayo Victor; Setzer, William N

    2013-07-03

    Neglected Tropical Diseases (NTDs), like leishmaniasis, are major causes of mortality in resource-limited countries. The mortality associated with these diseases is largely due to fragile healthcare systems, lack of access to medicines, and resistance by the parasites to the few available drugs. Many antiparasitic plant-derived isoprenoids have been reported, and many of them have good in vitro activity against various forms of Leishmania spp. In this work, potential Leishmania biochemical targets of antiparasitic isoprenoids were studied in silico. Antiparasitic monoterpenoids selectively docked to L. infantum nicotinamidase, L. major uridine diphosphate-glucose pyrophosphorylase and methionyl t-RNA synthetase. The two protein targets selectively targeted by germacranolide sesquiterpenoids were L. major methionyl t-RNA synthetase and dihydroorotate dehydrogenase. Diterpenoids generally favored docking to L. mexicana glycerol-3-phosphate dehydrogenase. Limonoids also showed some selectivity for L. mexicana glycerol-3-phosphate dehydrogenase and L. major dihydroorotate dehydrogenase while withanolides docked more selectively with L. major uridine diphosphate-glucose pyrophosphorylase. The selectivity of the different classes of antiparasitic compounds for the protein targets considered in this work can be explored in fragment- and/or structure-based drug design towards the development of leads for new antileishmanial drugs.

  15. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    NASA Astrophysics Data System (ADS)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the volume of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
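
    The matched filter referenced here is the classic spectral detector, which scores each pixel spectrum against a known target signature whitened by the background covariance. A minimal numpy sketch, with synthetic data standing in for the hyperspectral and CS-MUSI cubes (dimensions and the implant fraction are illustrative):

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classic spectral matched filter: score each pixel spectrum x by
    (t - m)^T S^-1 (x - m), with m and S the background mean and
    covariance. `cube` is (n_pixels, n_bands), `target` is (n_bands,)."""
    m = cube.mean(axis=0)
    S = np.cov(cube, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
    w = np.linalg.solve(S, target - m)          # S^-1 (t - m)
    return (cube - m) @ w

# Synthetic scene: 1000 background pixels, one pixel containing the target.
rng = np.random.default_rng(0)
n_bands = 40
target = np.abs(rng.normal(1.0, 0.3, n_bands))
cube = rng.normal(0.5, 0.1, (1000, n_bands))
cube[123] = 0.7 * cube[123] + 0.3 * target       # implant a sub-pixel target
scores = matched_filter_scores(cube, target)
print("best candidate pixel:", scores.argmax())  # expect 123
```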

  16. Target-oriented imaging of hydraulic fractures by applying the staining algorithm for downhole microseismic migration

    NASA Astrophysics Data System (ADS)

    Lin, Ye; Zhang, Haijiang; Jia, Xiaofeng

    2018-03-01

    For microseismic monitoring of hydraulic fracturing, microseismic migration can be used to image the fracture network with scattered microseismic waves. Compared with conventional microseismic location-based fracture characterization methods, microseismic migration can better constrain the stimulated reservoir volume regardless of the completeness of detected and located microseismic sources. However, the imaging results from microseismic migration may suffer from the contamination of other structures and thus the target fracture zones may not be illuminated properly. To solve this issue, in this study we propose a target-oriented staining algorithm for microseismic reverse-time migration. In the staining algorithm, the target area is first stained by constructing an imaginary velocity field and then a synchronized source wavefield only concerning the target structure is produced. As a result, a synchronized image from imaging with the synchronized source wavefield mainly contains the target structures. Synthetic tests based on a downhole microseismic monitoring system show that the target-oriented microseismic reverse-time migration method improves the illumination of target areas.

  17. Application of a Dynamic Programming Algorithm for Weapon Target Assignment

    DTIC Science & Technology

    2016-02-01

    Application of a Dynamic Programming Algorithm for Weapon Target Assignment, Lloyd Hammond, Weapons and… This report documents the methodology used to identify, develop and assess optimisation techniques to support the decision-making process in weapon target assignment. References include: [25] A. Turan, "Techniques for the Allocation of Resources Under Uncertainty," Middle Eastern Technical University, Ankara, Turkey, 2012.

  18. [Effect of algorithms for calibration set selection on quantitatively determining asiaticoside content in Centella total glucosides by near infrared spectroscopy].

    PubMed

    Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang

    2014-12-01

    The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. There are different algorithms for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint x-y distances (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established, for which seven indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with calibration sets selected by the SPXY algorithm were significantly different from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection could improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides, with no significant effect on the robustness of the models, which provides a reference for determining the appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
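
    Of the algorithms compared, Kennard-Stone is easy to state: start from the two mutually most distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest. A minimal sketch, with random data standing in for the spectra:

```python
import numpy as np

def kennard_stone(X, k):
    """Return indices of k calibration samples chosen by the
    Kennard-Stone algorithm (maximin coverage of the X space)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)   # two most distant points
    selected = [int(i), int(j)]
    while len(selected) < k:
        remaining = [p for p in range(len(X)) if p not in selected]
        # distance from each remaining sample to its nearest selected one
        dmin = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(dmin.argmax())])
    return selected

X = np.random.default_rng(0).random((50, 6))   # 50 "spectra", 6 variables
print(kennard_stone(X, 10))
```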

  19. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission in the AS/RS. A two-stage algorithm, the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space for the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.

  20. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    NASA Astrophysics Data System (ADS)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
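
    Generically, the bandit machinery behind such a method treats each motor task as an arm and samples the task with the highest upper confidence bound on its (classification-based) reward. A sketch with a synthetic reward model, using the standard UCB1 index rather than the paper's UCB-classif variant:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = [0.55, 0.60, 0.75, 0.58]     # hidden per-task BCI performance
n_tasks, horizon = len(true_rates), 400
counts, sums = np.zeros(n_tasks), np.zeros(n_tasks)

for t in range(1, horizon + 1):
    if t <= n_tasks:                      # play each task once to start
        arm = t - 1
    else:
        means = sums / counts
        ucb = means + np.sqrt(2 * np.log(t) / counts)   # UCB1 index
        arm = int(ucb.argmax())
    reward = rng.random() < true_rates[arm]   # simulated trial outcome
    counts[arm] += 1
    sums[arm] += reward

print("trials per task:", counts.astype(int))
print("selected task:", int(counts.argmax()))   # expect the 0.75 task
```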

  1. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repair, upgrading and refueling of spacecraft, has become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to the estimation of the relative pose of spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method uses the Differential Box-Counting (DBC) approach of fractal theory to map the gray-level image to a fractal-dimension distribution and restrain the noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods; the results indicate that the presented algorithm is a valid method for solving the relative pose problem for spacecraft.
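
    The DBC step admits a compact sketch: for grid size s, each s×s block contributes ceil(max/h) − ceil(min/h) + 1 boxes, with box height h = s·G/M (G gray levels, image side M), and the fractal dimension is the slope of log N(s) against log(1/s). The estimator below is a generic DBC, not the paper's full edge-extraction pipeline.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting (DBC) estimate of the fractal dimension
    of a square gray-level image with G = 256 gray levels."""
    M, G = img.shape[0], 256
    log_n, log_inv_s = [], []
    for s in sizes:
        h = s * G / M                      # box height in gray levels
        total = 0
        for r in range(0, M, s):
            for c in range(0, M, s):
                block = img[r:r + s, c:c + s]
                # boxes spanned between the block's min and max gray level
                total += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        log_n.append(np.log(total))
        log_inv_s.append(np.log(1.0 / s))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)   # FD is the fitted slope
    return slope

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print("estimated fractal dimension: %.2f" % dbc_fractal_dimension(img))
```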

  2. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
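
    The ER core is a classic alternating-projection loop: enforce the (estimated) Fourier magnitude in the frequency domain, then re-impose the known pixels in the image domain. A generic sketch, with the true magnitude handed in for the toy test (in the paper this magnitude is itself estimated from similar known patches):

```python
import numpy as np

def error_reduction(patch, known_mask, magnitude, n_iter=200):
    """Reconstruct missing pixels of `patch` given its (estimated)
    Fourier transform magnitude. `known_mask` marks valid pixels."""
    est = np.where(known_mask, patch, patch[known_mask].mean())
    for _ in range(n_iter):
        F = np.fft.fft2(est)
        F = magnitude * np.exp(1j * np.angle(F))   # keep phase, fix magnitude
        est = np.real(np.fft.ifft2(F))
        est[known_mask] = patch[known_mask]        # re-impose known pixels
    return est

# Toy test: hide a block of a smooth texture and hand the solver the
# true Fourier magnitude.
x, y = np.meshgrid(np.arange(32), np.arange(32))
patch = np.sin(x / 3.0) + np.cos(y / 5.0)
mask = np.ones_like(patch, bool)
mask[12:20, 12:20] = False
rec = error_reduction(patch, mask, np.abs(np.fft.fft2(patch)))
print("RMSE on missing area:", np.sqrt(((rec - patch)[~mask] ** 2).mean()).round(4))
```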

  3. A review of channel selection algorithms for EEG signal processing

    NASA Astrophysics Data System (ADS)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.

  4. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather a patient’s vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed for multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and the link reliability of the neighboring nodes, which is used to balance the energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
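
    As a toy rendering of the selection rule (the weights and exact cost form are assumptions, not the paper's): among neighbors with minimum hop count, choose the one minimizing a cost that grows as residual energy, free buffer and link reliability shrink.

```python
# Hypothetical neighbor table: (id, hop_count, residual_energy_J,
# free_buffer_pkts, link_reliability in [0, 1]).
neighbors = [
    ("A", 2, 0.80, 12, 0.95),
    ("B", 2, 0.40, 30, 0.99),
    ("C", 3, 0.95, 40, 0.99),   # more hops: filtered out first
]

def link_cost(energy, buf, reliability, w=(0.4, 0.3, 0.3)):
    # Lower is better; each term is inverted so scarce resources cost more.
    return w[0] / max(energy, 1e-9) + w[1] / max(buf, 1) + w[2] / max(reliability, 1e-9)

min_hops = min(n[1] for n in neighbors)
candidates = [n for n in neighbors if n[1] == min_hops]
best = min(candidates, key=lambda n: link_cost(*n[2:]))
print("next hop:", best[0])
```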

  5. Computational design of nanoparticle drug delivery systems for selective targeting

    NASA Astrophysics Data System (ADS)

    Duncan, Gregg A.; Bevan, Michael A.

    2015-09-01

    Ligand-functionalized nanoparticles capable of selectively binding to diseased versus healthy cell populations are attractive for improved efficacy of nanoparticle-based drug and gene therapies. However, nanoparticles functionalized with high affinity targeting ligands may lead to undesired off-target binding to healthy cells. In this work, Monte Carlo simulations were used to quantitatively determine net surface interactions, binding valency, and selectivity between targeted nanoparticles and cell surfaces. Dissociation constant, KD, and target membrane protein density, ρR, are explored over a range representative of healthy and cancerous cell surfaces. Our findings show highly selective binding to diseased cell surfaces can be achieved with multiple, weaker affinity targeting ligands that can be further optimized by varying the targeting ligand density, ρL. Using the approach developed in this work, nanomedicines can be optimally designed for exclusively targeting diseased cells and tissues.

  6. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. The multi-fractal dimension is adopted as the evaluation criterion of the feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. The proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method obtains competitive results in terms of both prediction accuracy and the number of selected features.

  7. An Expanded Lateral Interactive Clonal Selection Algorithm and Its Application

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Dai, Hongwei; Zhang, Jianchen; Tang, Zheng

    According to the clonal selection principle proposed by Burnet, there is no crossover of genetic material between members of the repertoire during the immune response process; i.e., there is no knowledge communication between different elite pools in previous clonal selection models. As a result, the search performance of these models is ineffective. To solve this problem, inspired by the concept of the idiotypic network theory, an expanded lateral interactive clonal selection algorithm (LICS) is put forward. In LICS, an antibody is matured not only through somatic hypermutation and receptor editing from the B cell, but also through stimuli from other antibodies. The stimuli are realized by memorizing some common gene segments on the idiotypes, based on which a lateral interactive receptor editing operator is also introduced. Then, LICS is applied to several benchmark instances of the traveling salesman problem. Simulation results show the efficiency and robustness of LICS when compared to other traditional algorithms.

  8. Target selection biases from recent experience transfer across effectors.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2016-02-01

    Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogenous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent from their associated actions.

  9. Multi-Target Angle Tracking Algorithm for Bistatic MIMO Radar Based on the Elements of the Covariance Matrix

    PubMed Central

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-01-01

    In this paper, we consider the problem of tracking the direction of arrival (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angles is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of adjacent moments is obtained through three approximate relations. Then, the proposed algorithm obtains the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm is improved by averaging the covariance matrix elements. Finally, the least squares method is used to estimate the DOD and DOA. The algorithm achieves automatic association of the angles and provides better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrate the effectiveness of the proposed algorithm. The algorithm provides technical support for the practical application of MIMO radar. PMID:29518957

  10. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, under limitations on resource availability and available time, so that resources must be allocated to each project. These resources are human resources, machine resources and raw material resources, with the additional consideration of not exceeding the predetermined budget. The problem can thus be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one multi-objective combinatorial optimization method to simplify the project selection process at a large scope.

  11. SPIDERS: selection of spectroscopic targets using AGN candidates detected in all-sky X-ray surveys

    NASA Astrophysics Data System (ADS)

    Dwelly, T.; Salvato, M.; Merloni, A.; Brusa, M.; Buchner, J.; Anderson, S. F.; Boller, Th.; Brandt, W. N.; Budavári, T.; Clerc, N.; Coffey, D.; Del Moro, A.; Georgakakis, A.; Green, P. J.; Jin, C.; Menzel, M.-L.; Myers, A. D.; Nandra, K.; Nichol, R. C.; Ridl, J.; Schwope, A. D.; Simm, T.

    2017-07-01

    SPIDERS (SPectroscopic IDentification of eROSITA Sources) is a Sloan Digital Sky Survey IV (SDSS-IV) survey running in parallel to the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) cosmology project. SPIDERS will obtain optical spectroscopy for large numbers of X-ray-selected active galactic nuclei (AGN) and galaxy cluster members detected in wide-area eROSITA, XMM-Newton and ROSAT surveys. We describe the methods used to choose spectroscopic targets for two sub-programmes of SPIDERS: X-ray-selected AGN candidates detected in the ROSAT All Sky and the XMM-Newton Slew surveys. We have exploited a Bayesian cross-matching algorithm, guided by priors based on mid-IR colour-magnitude information from the Wide-field Infrared Survey Explorer survey, to select the most probable optical counterpart to each X-ray detection. We empirically demonstrate the high fidelity of our counterpart selection method using a reference sample of bright well-localized X-ray sources collated from XMM-Newton, Chandra and Swift-XRT serendipitous catalogues, and also by examining blank-sky locations. We describe the down-selection steps which resulted in the final set of SPIDERS-AGN targets put forward for spectroscopy within the eBOSS/TDSS/SPIDERS survey, and present catalogues of these targets. We also present catalogues of ~12,000 ROSAT and ~1500 XMM-Newton Slew survey sources that have existing optical spectroscopy from SDSS-DR12, including the results of our visual inspections. On completion of the SPIDERS programme, we expect to have collected homogeneous spectroscopic redshift information over a footprint of ~7500 deg² for >85 per cent of the ROSAT and XMM-Newton Slew survey sources having optical counterparts in the magnitude range 17 < r < 22.5, producing a large and highly complete sample of bright X-ray-selected AGN suitable for statistical studies of AGN evolution and clustering.

  12. A novel algorithm for finding optimal driver nodes to target control complex networks and its applications for drug targets identification.

    PubMed

    Guo, Wei-Feng; Zhang, Shao-Wu; Shi, Qian-Qian; Zhang, Cheng-Ming; Zeng, Tao; Chen, Luonan

    2018-01-19

    The advances in target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug target identification in biological networks, we usually require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect more driver nodes to be consistent with certain well-selected network nodes (i.e., prior-known drug-target genes). Therefore, motivated by this fact, we pose and address a new and practical problem called the target control problem with objectives-guided optimization (TCO): how to control the interested variables (or targets) of a system with optimal driver nodes by minimizing the total quantity of drivers and meanwhile maximizing the quantity of constrained nodes among those drivers. Here, we design an efficient algorithm (TCOA) to find the optimal driver nodes for controlling targets in complex networks. We apply our TCOA to several real-world networks, and the results support that our TCOA can identify more precise driver nodes than the existing control-focus approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for our TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng . In previous theoretical research on full control, there is an observation that driver nodes tend to be low-degree nodes. However, for target control of biological networks, we find interestingly that the driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations. Furthermore, our results supply novel insights into how to efficiently target-control a complex system.

  13. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling

    PubMed Central

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on both binary and multiclass datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028

  14. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    PubMed

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on both binary and multiclass datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.
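
    The mRMR half of the hybrid (independent of the ABC wrapper) can be sketched with scikit-learn mutual-information estimators; synthetic data stands in for a microarray profile, and the greedy loop below is a generic mRMR, not the paper's implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = make_classification(n_samples=150, n_features=25,
                           n_informative=5, random_state=0)

relevance = mutual_info_classif(X, y, random_state=0)   # I(feature; class)
selected = [int(relevance.argmax())]
while len(selected) < 5:
    scores = []
    for f in range(X.shape[1]):
        if f in selected:
            scores.append(-np.inf)
            continue
        # redundancy: mean MI between the candidate and already-chosen genes
        red = np.mean([mutual_info_regression(
            X[:, [f]], X[:, s], random_state=0)[0] for s in selected])
        scores.append(relevance[f] - red)                # mRMR objective
    selected.append(int(np.argmax(scores)))

print("mRMR-selected genes:", selected)
```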

  15. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.

  16. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in various applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the block matching error concealment algorithm is compared with the frequency selective extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are processed with both error concealment algorithms. According to the simulation results, frequency selective extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM, than the block matching algorithm.

  17. Systematic optimization model and algorithm for binding sequence selection in computational enzyme design

    PubMed Central

    Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan

    2013-01-01

    A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589

  18. CDTI target selection criteria

    NASA Technical Reports Server (NTRS)

    Britt, C. L.; Davis, C. M.; Jackson, C. B.; Mcclellan, V. A.

    1984-01-01

    A Cockpit Display of Traffic Information (CDTI) is a cockpit instrument which provides information to the aircrew on the relative location of aircraft traffic in the vicinity of their aircraft (ownship). In addition, the CDTI may provide information to assist in navigation and in aircraft control. It is usually anticipated that the CDTI will be integrated with a horizontal situation indicator used for navigational purposes and/or with a weather radar display. In this study, several sets of aircraft traffic data are analyzed to determine statistics on the number of targets that will be displayed on a CDTI using various target selection criteria. Traffic data were obtained from an Atlanta Terminal Area Simulation and from radar tapes recorded at the Atlanta and Miami terminal areas. Results are given in the form of plots showing the average percentage of time (or probability) that an aircraft equipped with a CDTI would observe from 0 to 10 other aircraft on the display for range settings on the CDTI up to 30 n. mi. and using various target discrimination techniques.

  19. Kernel Extended Real-Valued Negative Selection Algorithm (KERNSA)

    DTIC Science & Technology

    2013-06-01

    are discarded, which is similar to how T-cells function in the BIS. An unlabeled, future sample is considered non-self if any detectors match it.
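
    For context, the plain real-valued negative selection loop underlying KERNSA can be sketched as follows; the kernel-based affinity that KERNSA substitutes for the Euclidean distance is omitted, and all radii and set sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, radius = 2, 0.12

# Self set: "normal" samples clustered in one corner of the unit square.
self_set = rng.normal(0.25, 0.05, (200, dim)).clip(0, 1)

# Generate detectors; keep only those that do NOT match any self sample.
detectors = []
while len(detectors) < 100:
    d = rng.random(dim)
    if np.linalg.norm(self_set - d, axis=1).min() > radius:
        detectors.append(d)
detectors = np.array(detectors)

def is_nonself(x):
    # A sample is non-self if any detector matches (lies within radius).
    return bool((np.linalg.norm(detectors - x, axis=1) < radius).any())

print(is_nonself(np.array([0.25, 0.25])))  # inside self region -> likely False
print(is_nonself(np.array([0.90, 0.90])))  # far from self     -> likely True
```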

  20. Selecting a Targeting Method to Identify BPL Households in India

    ERIC Educational Resources Information Center

    Alkire, Sabina; Seth, Suman

    2013-01-01

    This paper proposes how to select a methodology to target multidimensionally poor households, and how to update that targeting exercise periodically. We present this methodology in the context of discussions regarding the selection of a targeting methodology in India. In 1992, 1997, and 2002 the Indian government identified households that are…

  1. The admissible portfolio selection problem with transaction costs and an improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Zhang, Wei-Guo

    2010-05-01

    In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.

  2. High affinity ligands from in vitro selection: Complex targets

    PubMed Central

    Morris, Kevin N.; Jensen, Kirk B.; Julin, Carol M.; Weil, Michael; Gold, Larry

    1998-01-01

    Human red blood cell membranes were used as a model system to determine if the systematic evolution of ligands by exponential enrichment (SELEX) methodology, an in vitro protocol for isolating high-affinity oligonucleotides that bind specifically to virtually any single protein, could be used with a complex mixture of potential targets. Ligands to multiple targets were generated simultaneously during the selection process, and the binding affinities of these ligands for their targets are comparable to those found in similar experiments against pure targets. A secondary selection scheme, deconvolution-SELEX, facilitates rapid isolation of the ligands to targets of special interest within the mixture. SELEX provides high-affinity compounds for multiple targets in a mixture and might allow a means for dissecting complex biological systems. PMID:9501188

  3. Autonomous Selection of a Rover Laser Target on Mars

    NASA Image and Video Library

    2016-07-21

    NASA's Curiosity Mars rover autonomously selects some of the targets for the laser and telescopic camera of the rover's Chemistry and Camera (ChemCam) instrument. For example, on-board software analyzed the image on the left, chose the target highlighted with the yellow dot, and pointed ChemCam to acquire laser analysis and the image on the right. Most ChemCam targets are still selected by scientists discussing rocks or soil seen in images the rover has sent to Earth, but the autonomous targeting provides an added capability. It can offer a head start on acquiring composition information at a location just reached by a drive. The software for target selection and instrument pointing is called AEGIS, for Autonomous Exploration for Gathering Increased Science. The image on the left was taken by the left eye of Curiosity's stereo Navigation Camera (Navcam) a few minutes after the rover completed a drive of about 43 feet (13 meters) on July 14, 2016, during the 1,400th Martian day, or sol, of the rover's work on Mars. Using AEGIS for target selection and pointing based on the Navcam imagery, Curiosity's ChemCam zapped a grid of nine points on a rock chosen for meeting criteria set by the science team. In this run, parameters were set to find bright-toned outcrop rock rather than darker rocks, which in this area tend to be loose on the surface. Within less than 30 minutes after the Navcam image was taken, ChemCam had used its laser on all nine points and had taken before-and-after images of the target area with its remote micro-imager (RMI) camera. The image at right combines those two RMI exposures. The nine laser targets are marked in red at the center. On the Navcam image at left, the yellow dot identifies the selected target area, which is about 2.2 inches (5.6 centimeters) in diameter. An unannotated version of this Sol 1400 Navcam image is available. ChemCam records spectra of glowing plasma generated when the laser hits a target point. These spectra provide

  4. Motor cortex guides selection of predictable movement targets

    PubMed Central

    Woodgate, Philip J.W.; Strauss, Soeren; Sami, Saber A.; Heinke, Dietmar

    2016-01-01

    The present paper asks whether the motor cortex contributes to prediction-based guidance of target selection. This question was inspired by recent evidence that suggests (i) recurrent connections from the motor system into the attentional system may extract movement-relevant perceptual information and (ii) that the motor cortex can not only generate predictions of the sensory consequences of movements but may also operate as a predictor of perceptual events in general. To test this idea we employed a choice reaching task requiring participants to rapidly reach and touch a predictable or unpredictable colour target. Motor cortex activity was modulated via transcranial direct current stimulation (tDCS). In Experiment 1, target colour repetitions were predictable. Under such conditions anodal tDCS facilitated selection versus sham and cathodal tDCS. This improvement was apparent for trajectory curvature but not movement initiation. Conversely, where no colour predictability was embedded, reach performance was unaffected by tDCS. Finally, the results of a key-press experiment suggested that motor cortex involvement is restricted to tasks where the predictable target colour is movement-relevant. The outcomes are interpreted as evidence that the motor system contributes to the top-down guidance of selective attention to movement targets. PMID:25835319

  5. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    There are many emotion features. If all these features are employed to recognize emotions, redundant features may exist. Furthermore, the recognition result is unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected using the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and the time of feature extraction.

  6. Adaptive block online learning target tracking based on super pixel segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Yue; Li, Jianzeng

    2018-04-01

    Video target tracking technology has made great progress through the unremitting exploration of predecessors, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation technology. First, we segment the selected region using the simple linear iterative clustering (SLIC) algorithm; after that, we group the area into blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then ignores sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete target tracking. The experimental results show that our algorithm works effectively under occlusion interference, rotation change, scale change and many other problems in target tracking, compared with current mainstream algorithms.
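
    The segmentation front end can be sketched with standard libraries (a rough analogue, not the paper's improved DBSCAN): SLIC produces superpixels, whose mean-color-plus-centroid descriptors are then grouped by DBSCAN into candidate sub-blocks. The descriptor scaling and clustering parameters below are assumptions.

```python
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

img = astronaut()                            # stand-in for a video frame
labels = slic(img, n_segments=200, compactness=10, start_label=0)

# One descriptor per superpixel: mean RGB plus (scaled) centroid.
feats = []
for sp in range(labels.max() + 1):
    ys, xs = np.nonzero(labels == sp)
    mean_rgb = img[ys, xs].mean(axis=0)
    feats.append(np.hstack([mean_rgb, ys.mean() / 4.0, xs.mean() / 4.0]))
feats = np.array(feats)

# Group superpixels with similar appearance and location into sub-blocks.
groups = DBSCAN(eps=25.0, min_samples=3).fit_predict(feats)
print("sub-blocks found:", len(set(groups) - {-1}))
```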

  7. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach using a fast deinterlacing algorithm rather than using only the CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD was proposed to reduce the overall computational load. A reliable condition to be used for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
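
    The LA branch is the cheapest of the three and is easy to sketch; the version below (a generic sketch, not the paper's code) doubles a field's height by averaging vertical neighbors. MELA and CAD refine this with edge directions and covariance modeling, respectively.

```python
import numpy as np

def line_average_deinterlace(field, top_field=True):
    """Fill the missing lines of an interlaced field by averaging the
    lines above and below (the LA method used for plain regions)."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    frame[0 if top_field else 1::2] = field          # copy known lines
    missing = np.arange(1 if top_field else 0, 2 * h, 2)
    for r in missing:
        above = frame[r - 1] if r > 0 else frame[r + 1]
        below = frame[r + 1] if r < 2 * h - 1 else frame[r - 1]
        frame[r] = 0.5 * (above + below)             # vertical average
    return frame

field = np.arange(12, dtype=float).reshape(4, 3)     # toy 4-line field
print(line_average_deinterlace(field))
```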

  8. A combined joint diagonalization-MUSIC algorithm for subsurface targets localization

    NASA Astrophysics Data System (ADS)

    Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2014-06-01

    This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface objects locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can be then distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoreticallycalculated Green's functions are then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.

  9. Satellite aerosol retrieval using dark target algorithm by coupling BRDF effect over AERONET site

    NASA Astrophysics Data System (ADS)

    Yang, Leiku; Xue, Yong; Guang, Jie; Li, Chi

    2012-11-01

    In most satellite aerosol retrieval algorithms, even for multi-angle instruments, a simple forward model (FM) based on the Lambertian surface assumption is employed to simulate top-of-atmosphere (TOA) spectral reflectance, which does not fully account for the surface bidirectional reflectance distribution function (BRDF) effect. This approximate forward model greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and yields a faster algorithm. At the same time, it introduces systematic biases into the aerosol optical depth (AOD) retrieval. The AOD product derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data with the dark target algorithm is considered one of the more accurate satellite aerosol products at present. Though it performs well at the global scale, uncertainties are still found at regional scales in many studies. The Lambertian surface assumption employed in the retrieval algorithm may be one of the contributing factors. In this study, we first use radiative transfer simulations over dark targets to assess the extent of the uncertainty introduced by the Lambertian surface assumption. The results show that the uncertainty in the AOD retrieval can reach up to ±0.3. Then the Lambertian FM (L_FM) and the BRDF FM (BRDF_FM) are each employed in AOD retrieval using the dark target algorithm from MODARNSS (MODIS/Terra and MODIS/Aqua Atmosphere Aeronet Subsetting Product) data over the Beijing AERONET site. The validation shows that the accuracy of the AOD retrieval is improved by employing the BRDF_FM, which accounts for the surface BRDF effect: the regression slope of retrieved AOD against AERONET AOD increases from 0.7163 (L_FM) to 0.7776 (BRDF_FM) and the intercept decreases from 0.0778 (L_FM) to 0.0627 (BRDF_FM).
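
    A minimal sketch of the inversion step common to dark-target retrievals: the measured TOA reflectance is interpolated through a precomputed look-up table of simulated reflectance versus AOD. The table values below are fabricated placeholders; in practice they come from a radiative transfer code run with either the Lambertian or the BRDF forward model:

```python
import numpy as np

aod_grid = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 2.0])
toa_refl_lut = np.array([0.03, 0.05, 0.08, 0.13, 0.21, 0.33])  # assumed, monotonic

def retrieve_aod(measured_toa_refl):
    """Invert the monotonic LUT: reflectance -> AOD (linear interpolation)."""
    return np.interp(measured_toa_refl, toa_refl_lut, aod_grid)

print(retrieve_aod(0.10))   # falls between the 0.25 and 0.5 AOD nodes
```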

  10. Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer

    DTIC Science & Technology

    2016-06-01

    TECHNICAL REPORT: Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer. Leon Vaizer, Jesse Angle, Neil... June 2016. (Only the report front matter and table of contents are available for this record.)

  11. Comparison of genetic algorithm and imperialist competitive algorithms in predicting bed load transport in clean pipe.

    PubMed

    Ebtehaj, Isa; Bonakdari, Hossein

    2014-01-01

    The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article examines the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the objective function (the mean square error between observed and predicted Froude numbers). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups are presented. The roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%), and the ICA outperforms the GA across all six models. The results of both algorithms were also compared with a multi-layer perceptron and existing equations.
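
    A minimal sketch of the roulette wheel (fitness-proportionate) parent selection mentioned above; converting a minimization error into fitness by taking its reciprocal is one common convention, assumed here:

```python
import numpy as np

def roulette_select(fitness, n_parents, rng=None):
    """Draw parent indices with probability proportional to fitness."""
    rng = rng or np.random.default_rng()
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=n_parents, p=p)

errors = np.array([0.007, 0.012, 0.009, 0.020])  # per-individual error (minimize)
fitness = 1.0 / (errors + 1e-12)                 # lower error -> higher fitness
print(roulette_select(fitness, n_parents=2))
```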

  12. Target Transformation Constrained Sparse Unmixing (ttcsu) Algorithm for Retrieving Hydrous Minerals on Mars: Application to Southwest Melas Chasma

    NASA Astrophysics Data System (ADS)

    Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.

    2018-04-01

    Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because selecting endmembers from hyperspectral images is difficult, sparse unmixing has previously been proposed for application to CRISM data of Mars. However, it becomes challenging as the endmember library grows. Here, we propose a new methodology termed Target Transformation Constrained Sparse Unmixing (TTCSU) to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. Well-studied CRISM images of southwest Melas Chasma were used to validate the methodology in this study. The sulfate jarosite was detected in southwest Melas Chasma; its distribution is consistent with previous work and its abundance is comparable. More validations will be done in our future work.
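
    A minimal sketch of the constrained sparse-unmixing step, assuming the prior detection pass (the target transformation in the paper) has flagged a few library entries; the spectra here are random placeholders and the solver is plain non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

n_bands, n_lib = 50, 200
library = np.abs(np.random.randn(n_bands, n_lib))   # full endmember library
detected = [3, 17, 42]                              # indices from the prior pass (assumed)

A = library[:, detected]                            # constrained sub-library
true_ab = np.array([0.6, 0.3, 0.1])
pixel = A @ true_ab + 0.01 * np.random.randn(n_bands)  # synthetic mixed spectrum

abundances, residual = nnls(A, pixel)               # non-negative least squares
print("normalized abundances:", abundances / abundances.sum())
```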

  13. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    PubMed

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm is used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection over observed motion sensor events is discussed and tested. The activity recognition performance of the BP-trained neural network is then evaluated and compared with two probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy, and that selecting unsuitable feature datasets increases computational complexity and degrades recognition accuracy. Furthermore, the neural network trained with the BP algorithm achieves relatively better human activity recognition performance than the NB classifier and the HMM.

  14. Targeting of phage particles towards endothelial cells by antibodies selected through a multi-parameter selection strategy.

    PubMed

    Mandrup, Ole A; Lykkemark, Simon; Kristensen, Peter

    2017-02-10

    One of the hallmarks of cancer is sustained angiogenesis. Here, normal endothelial cells are activated, and their formation of new blood vessels leads to continued tumour growth. An improved patient condition is often observed when angiogenesis is prevented or normalized through targeting of these genomically stable endothelial cells. However, intracellular targets constitute a challenge in therapy, as the agents modulating these targets have to be delivered and internalized specifically by the endothelial cells. Selection of antibodies binding specifically to certain cell types is well established; it is nonetheless a challenge to ensure that the binding of an antibody to the target cell will mediate internalization. Previously, selection of such antibodies has been performed against cancer cell lines, most often using either monovalent or polyvalent display. In this article, we describe selections that isolate internalizing antibodies by sequentially combining monovalent and polyvalent display using two types of helper phages, one which increases display valence and one which reduces background. One of the selected antibodies was found to mediate internalization into human endothelial cells, although our results confirm that the single-stranded nature of the DNA packaged into phage particles may limit applications aimed at targeting nucleic acids in mammalian cells.

  15. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least square regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  16. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
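
    A minimal numerical sketch of a Kalman-filter conscan estimator under a simplified linear model (an assumption, not the article's formulation): as the beam traces the scan circle, the measured power modulation is taken to be proportional to the pointing offset projected onto the instantaneous scan direction:

```python
import numpy as np

x = np.zeros(2)                 # state: [az offset, el offset]
P = np.eye(2)                   # state covariance
Q = 1e-6 * np.eye(2)            # process noise: the offset drifts slowly
R = 0.05                        # measurement noise variance (assumed)

rng = np.random.default_rng(0)
true_offset = np.array([0.3, -0.2])
for k in range(200):
    theta = 2 * np.pi * k / 50                      # position on the scan circle
    H = np.array([[np.cos(theta), np.sin(theta)]])  # linearized measurement row
    z = H @ true_offset + np.sqrt(R) * rng.standard_normal()
    P = P + Q                                       # predict (offset ~ constant)
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T / S                                 # Kalman gain, 2x1
    x = x + (K * (z - H @ x)).ravel()               # update state
    P = (np.eye(2) - K @ H) @ P                     # update covariance
print("estimated offset:", x.round(3))
```

    Unlike a batch FFT estimate, each new sample updates the estimate immediately, which illustrates the continuous-update advantage noted above.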

  17. Real-time implementation of a multispectral mine target detection algorithm

    NASA Astrophysics Data System (ADS)

    Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.

    2003-09-01

    Spatial-spectral anomaly detection (the "RX Algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual-PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be quickly changed and a new implementation automatically generated. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
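
    A compact sketch of the RX detector core: the Mahalanobis distance of each pixel's spectral vector from the background mean under the background covariance. For brevity the background statistics are global here, whereas the COBRA implementation estimates them locally under a sliding target/background mask:

```python
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) image -> (rows, cols) anomaly scores."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # per-pixel Mahalanobis^2
    return scores.reshape(h, w)

cube = np.random.rand(64, 64, 6)
cube[30, 30] += 3.0                                   # implanted anomaly
print(np.unravel_index(rx_scores(cube).argmax(), (64, 64)))  # -> (30, 30)
```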

  18. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and strong background noise, an effective target detection algorithm implemented with OpenCV is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the lack of correlation of noise across sequential images. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method based on frame differencing and background subtraction with adaptive background updating. Results indicate that this method is simple and can stably extract the foreground moving target from the video sequence; because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Afterwards, optimal thresholding is used to segment the images, converting grayscale frames to binary images to provide a better basis for detection across the sequence. Finally, using the correspondence of moving objects between frames and mathematical morphology processing, we eliminate noise, reduce spurious region areas, and smooth region boundaries. Experimental results prove that the algorithm achieves rapid detection of small infrared targets.
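
    A minimal OpenCV sketch of the combined scheme: frame differencing plus background subtraction against a running-average background, then Otsu thresholding and morphological clean-up. The file name, update rate, and kernel size are illustrative assumptions:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_sequence.avi")   # hypothetical input sequence
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev.astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    frame_diff = cv2.absdiff(gray, prev)    # temporal differencing
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    cv2.accumulateWeighted(gray, background, 0.02)   # adaptive background update
    prev = gray

    # Combine the complementary cues, binarize (Otsu), clean with morphology.
    combined = cv2.bitwise_and(frame_diff, bg_diff)
    _, mask = cv2.threshold(combined, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    cv2.imshow("detections", mask)
    if cv2.waitKey(30) == 27:               # Esc quits
        break
cap.release()
```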

  19. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    PubMed

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for identification of targets with complicated movements. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  20. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances where the current methods could not.
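
    A hedged sketch of the approach: each GA individual is a binary mask over gages, and its fitness is the least-squares residual of the loads-equation fit restricted to the selected gages, penalized by gage count. The GA operators and all parameters are deliberately simple illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_gages, n_loads = 20, 60
G = rng.standard_normal((n_loads, n_gages))   # gage responses to calibration loads
loads = G[:, [2, 5, 11]] @ [1.0, -0.5, 2.0] + 0.01 * rng.standard_normal(n_loads)

def fitness(mask):
    if not mask.any():
        return -np.inf
    coef, res, *_ = np.linalg.lstsq(G[:, mask], loads, rcond=None)
    rss = res[0] if res.size else np.sum((G[:, mask] @ coef - loads) ** 2)
    return -(rss + 0.01 * mask.sum())         # penalize large equations

pop = rng.random((30, n_gages)) < 0.3         # random initial gage subsets
for gen in range(100):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the 10 fittest
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_gages)                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_gages) < 0.02             # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected gages:", np.flatnonzero(best))
```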

  1. Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Li, Jun

    In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors, taking into account three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromise solution for the proposed constrained multiobjective portfolio selection model.

  2. Target Selection for the SDSS-III MARVELS Survey

    NASA Astrophysics Data System (ADS)

    Paegert, Martin; Stassun, Keivan G.; De Lee, Nathan; Pepper, Joshua; Fleming, Scott W.; Sivarani, Thirupathi; Mahadevan, Suvrath; Mack, Claude E., III; Dhital, Saurav; Hebb, Leslie; Ge, Jian

    2015-06-01

    We present the target selection process for the Multi-object APO Radial Velocity Exoplanets Large-area Survey (MARVELS), which is part of the Sloan Digital Sky Survey (SDSS) III. MARVELS is a medium-resolution (R ∼ 11,000) multi-fiber spectrograph capable of obtaining radial velocities for 60 objects at a time in order to find brown dwarfs and giant planets. The survey was configured to target dwarf stars with effective temperatures approximately between 4500 and 6250 K. For the first 2 years MARVELS relied on low-resolution spectroscopic pre-observations to estimate the effective temperature and log(g) for candidate stars and then selected suitable dwarf stars from this pool. Ultimately, the pre-observation spectra proved ineffective at filtering out giant stars; many giants were incorrectly classified as dwarfs, resulting in a giant contamination rate of ∼30% for the first phase of the MARVELS survey. Thereafter, the survey instead applied a reduced proper motion cut to eliminate giants and used the Infrared Flux Method to estimate effective temperatures, using only extant photometric and proper-motion catalog information. The target selection method introduced here may be useful for other surveys that need to rely on extant catalog data for selection of specific stellar populations.
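
    A small sketch of a reduced proper motion (RPM) cut of the kind adopted above. RPM serves as a crude absolute-magnitude proxy, H = m + 5·log10(μ) + 5 with μ in arcsec/yr; the dividing line below is an illustrative assumption, not the MARVELS cut:

```python
import numpy as np

def reduced_proper_motion(m_app, mu_arcsec_yr):
    """H = m + 5*log10(mu) + 5, the standard RPM definition."""
    return m_app + 5.0 * np.log10(mu_arcsec_yr) + 5.0

m_J = np.array([9.8, 10.2, 9.5])      # apparent J magnitudes
mu = np.array([0.150, 0.004, 0.080])  # proper motions in arcsec/yr

H_J = reduced_proper_motion(m_J, mu)
is_dwarf = H_J > 3.0                  # hypothetical dwarf/giant dividing line
print(H_J.round(2), is_dwarf)         # slow movers (likely giants) fail the cut
```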

  3. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    NASA Technical Reports Server (NTRS)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.

  4. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data.

    PubMed

    Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features over a large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm HICATS incorporating imperialist competition algorithm (ICA) which performs global search and tabu search (TS) that conducts fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works including the conventional version of binary optimization algorithm in terms of classification accuracy and the number of selected genes.

  5. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data

    PubMed Central

    Aorigele; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features over a large number of gene expression data. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm HICATS incorporating imperialist competition algorithm (ICA) which performs global search and tabu search (TS) that conducts fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works including the conventional version of binary optimization algorithm in terms of classification accuracy and the number of selected genes. PMID:27579323

  6. Structural Implications for Selective Targeting of PARPs.

    PubMed

    Steffen, Jamin D; Brody, Jonathan R; Armen, Roger S; Pascal, John M

    2013-12-20

    Poly(ADP-ribose) polymerases (PARPs) are a family of enzymes that use NAD(+) as a substrate to synthesize polymers of ADP-ribose (PAR) as post-translational modifications of proteins. PARPs have important cellular roles that include preserving genomic integrity, telomere maintenance, transcriptional regulation, and cell fate determination. The diverse biological roles of PARPs have made them attractive therapeutic targets, which have fueled the pursuit of small molecule PARP inhibitors. The design of PARP inhibitors has matured over the past several years resulting in several lead candidates in clinical trials. PARP inhibitors are mainly used in clinical trials to treat cancer, particularly as sensitizing agents in combination with traditional chemotherapy to reduce side effects. An exciting aspect of PARP inhibitors is that they are also used to selectively kill tumors with deficiencies in DNA repair proteins (e.g., BRCA1/2) through an approach termed "synthetic lethality." In the midst of the tremendous efforts that have brought PARP inhibitors to the forefront of modern chemotherapy, most clinically used PARP inhibitors bind to conserved regions that permit cross-selectivity with other PARPs containing homologous catalytic domains. Thus, the differences between therapeutic effects and adverse effects stemming from pan-PARP inhibition compared to selective inhibition are not well understood. In this review, we discuss current literature that has found ways to gain selectivity for one PARP over another. We furthermore provide insights into targeting other domains that make up PARPs, and how new classes of drugs that target these domains could provide a high degree of selectivity by affecting specific cellular functions. A clear understanding of the inhibition profiles of PARP inhibitors will not only enhance our understanding of the biology of individual PARPs, but may provide improved therapeutic options for patients.

  7. Structural Implications for Selective Targeting of PARPs

    PubMed Central

    Steffen, Jamin D.; Brody, Jonathan R.; Armen, Roger S.; Pascal, John M.

    2013-01-01

    Poly(ADP-ribose) polymerases (PARPs) are a family of enzymes that use NAD+ as a substrate to synthesize polymers of ADP-ribose (PAR) as post-translational modifications of proteins. PARPs have important cellular roles that include preserving genomic integrity, telomere maintenance, transcriptional regulation, and cell fate determination. The diverse biological roles of PARPs have made them attractive therapeutic targets, which have fueled the pursuit of small molecule PARP inhibitors. The design of PARP inhibitors has matured over the past several years resulting in several lead candidates in clinical trials. PARP inhibitors are mainly used in clinical trials to treat cancer, particularly as sensitizing agents in combination with traditional chemotherapy to reduce side effects. An exciting aspect of PARP inhibitors is that they are also used to selectively kill tumors with deficiencies in DNA repair proteins (e.g., BRCA1/2) through an approach termed “synthetic lethality.” In the midst of the tremendous efforts that have brought PARP inhibitors to the forefront of modern chemotherapy, most clinically used PARP inhibitors bind to conserved regions that permit cross-selectivity with other PARPs containing homologous catalytic domains. Thus, the differences between therapeutic effects and adverse effects stemming from pan-PARP inhibition compared to selective inhibition are not well understood. In this review, we discuss current literature that has found ways to gain selectivity for one PARP over another. We furthermore provide insights into targeting other domains that make up PARPs, and how new classes of drugs that target these domains could provide a high degree of selectivity by affecting specific cellular functions. A clear understanding of the inhibition profiles of PARP inhibitors will not only enhance our understanding of the biology of individual PARPs, but may provide improved therapeutic options for patients. PMID:24392349

  8. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.

  9. Automatic peak selection by a Benjamini-Hochberg-based algorithm.

    PubMed

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
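
    A minimal implementation of the Benjamini-Hochberg step-up selection used above: given candidate p-values, keep the largest k with p_(k) ≤ (k/m)·q. The FDR level q below is an assumed example value:

```python
import numpy as np

def bh_select(pvalues, q=0.05):
    """Return indices of candidates accepted at false discovery rate q."""
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])  # largest rank satisfying the bound
    return order[: k + 1]

peak_pvalues = [0.001, 0.009, 0.04, 0.2, 0.6]
print("accepted peaks:", bh_select(peak_pvalues, q=0.05))
```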

  10. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    PubMed Central

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID

  11. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of the fiber Bragg grating (FBG) sensing signal and thus affect the quality of sensing detection, so recovery of the signal from observed noisy data is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove noise from the signal. Empirical mode decomposition (EMD) is first used to decompose the signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and the mutual information (MI) of partial modes is then calculated; MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for the FBG spectral signal. Compared with similar algorithms, the signal-to-noise ratio can be improved by more than 10 dB after processing with the proposed algorithm, and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.

  12. Network intrusion detection by the coevolutionary immune algorithm of artificial immune systems with clonal selection

    NASA Astrophysics Data System (ADS)

    Salamatova, T.; Zhukov, V.

    2017-02-01

    The paper presents the application of the artificial immune systems apparatus as a heuristic method of network intrusion detection for the algorithmic provision of intrusion detection systems. The coevolutionary immune algorithm of artificial immune systems with clonal selection was elaborated. The algorithm's effectiveness was evaluated empirically on different datasets, and it was compared with analogous methods to gauge its efficiency. The fundamental rule-based solutions generated by this algorithm are described in the article.

  13. Enhanced parent selection algorithms in mintroute protocol

    NASA Astrophysics Data System (ADS)

    Kim, Ki-Il

    2012-11-01

    Low-rate, short-range wireless radio communication on small devices often hampers high reliability in wireless sensor networks. However, more applications increasingly demand high reliability, and various approaches have been proposed at each layer to meet this requirement. Among these, MintRoute is a well-known network-layer approach that introduces a link-quality metric for selecting paths towards the sink: by choosing the link with the highest measured quality, a packet has a higher chance of being transmitted over the link without error. However, several operational issues remain. In this paper, we propose revised algorithms that improve the MintRoute protocol. These include parent selection that considers the distance and level from the sink node, and a fast recovery method against failures. Simulations and analysis are performed to validate the reduced end-to-end delay and fast recovery from failures, thus enhancing the reliability of communication.

  14. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    PubMed

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems; ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques, to solve a variety of biological problems. One of the most common biologically inspired techniques is the genetic algorithm (GA), which takes the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics based problems.

  15. Evolution of egg target size: an analysis of selection on correlated characters.

    PubMed

    Podolsky, R D

    2001-12-01

    In broadcast-spawning marine organisms, chronic sperm limitation should select for traits that improve chances of sperm-egg contact. One mechanism may involve increasing the size of the physical or chemical target for sperm. However, models of fertilization kinetics predict that increasing egg size can reduce net zygote production due to an associated decline in fecundity. An alternate method for increasing physical target size is through addition of energetically inexpensive external structures, such as the jelly coats typical of eggs in species from several phyla. In selection experiments on eggs of the echinoid Dendraster excentricus, in which sperm was used as the agent of selection, eggs with larger overall targets were favored in fertilization. Actual shifts in target size following selection matched quantitative predictions of a model that assumed fertilization was proportional to target size. Jelly volume and ovum volume, two characters that contribute to target size, were correlated both within and among females. A cross-sectional analysis of selection partitioned the independent effects of these characters on fertilization success and showed that they experience similar direct selection pressures. Coupled with data on relative organic costs of the two materials, these results suggest that, under conditions where fertilization is limited by egg target size, selection should favor investment in low-cost accessory structures and may have a relatively weak effect on the evolution of ovum size.

  16. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy and quick response of multiple model maneuvering target tracking, the interacting multiple models five degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher but deterministic odd ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), interacting multiple models unscented Kalman filter (IMMUKF), 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).

  17. An Artificial Bee Colony Algorithm for Uncertain Portfolio Selection

    PubMed Central

    Chen, Wei

    2014-01-01

    Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem which takes transaction costs and diversification degree of portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure diversification degree of portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm. PMID:25089292

  18. An artificial bee colony algorithm for uncertain portfolio selection.

    PubMed

    Chen, Wei

    2014-01-01

    Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem which takes transaction costs and diversification degree of portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure diversification degree of portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm.

  19. Purification-Free, Target-Selective Immobilization of a Protein from Cell Lysates.

    PubMed

    Cha, Jaehyun; Kwon, Inchan

    2018-02-27

    Protein immobilization has been widely used for laboratory experiments and industrial processes. Preparation of a recombinant protein for immobilization usually requires laborious and expensive purification steps. Here, a novel purification-free, target-selective immobilization technique of a protein from cell lysates is reported. Purification steps are skipped by immobilizing a target protein containing a clickable non-natural amino acid (p-azidophenylalanine) in cell lysates onto alkyne-functionalized solid supports via bioorthogonal azide-alkyne cycloaddition. In order to achieve a target protein-selective immobilization, p-azidophenylalanine was introduced into an exogenous target protein, but not into endogenous non-target proteins using host cells with amber codon-free genomic DNAs. Immobilization of superfolder fluorescent protein (sfGFP) from cell lysates is as efficient as that of the purified sfGFP. Using two fluorescent proteins (sfGFP and mCherry), the authors also demonstrated that the target proteins are immobilized with a minimal immobilization of non-target proteins (target-selective immobilization).

  20. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    PubMed

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
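
    A toy sketch of the evidence-combination idea behind such a Bayesian screen: independent pieces of evidence (peak shape, isotope pattern, mass accuracy, ...) each contribute a likelihood ratio, and Bayes' rule turns a prior probability of presence into a posterior. All numbers are illustrative assumptions:

```python
import numpy as np

def posterior_presence(prior, likelihood_ratios):
    """Posterior odds = prior odds x product of independent likelihood ratios."""
    odds = prior / (1.0 - prior) * np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

prior = 0.01                 # assumed prevalence of this xenobiotic
lrs = [12.0, 4.5, 8.0]       # evidence from peak shape, isotopes, m/z error
print(f"P(present | evidence) = {posterior_presence(prior, lrs):.3f}")
```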

  1. Guidance, navigation, and control subsystem equipment selection algorithm using expert system methods

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1991-01-01

    Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than is currently achievable through the use of a standard database system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.

  2. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.

  3. A polarized low-coherence interferometry demodulation algorithm by recovering the absolute phase of a selected monochromatic frequency.

    PubMed

    Jiang, Junfeng; Wang, Shaohua; Liu, Tiegen; Liu, Kun; Yin, Jinde; Meng, Xiange; Zhang, Yimo; Wang, Shuang; Qin, Zunqi; Wu, Fan; Li, Dingjie

    2012-07-30

    A demodulation algorithm based on the absolute phase recovery of a selected monochromatic frequency is proposed for an optical fiber Fabry-Perot pressure sensing system. The algorithm uses the Fourier transform to obtain the relative phase, and uses the intercept of the linear fit to the unwrapped phase-frequency curve to identify the interference order; these are then combined to recover the absolute phase. A simplified mathematical model of the polarized low-coherence interference fringes was established to illustrate the principle of the proposed algorithm. Phase unwrapping and the selection of the monochromatic frequency are discussed in detail. A pressure measurement experiment was carried out to verify the effectiveness of the proposed algorithm. Results showed that the demodulation precision of the algorithm reaches 0.15 kPa, a 13-fold improvement over the phase-slope-based algorithm.
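
    A simplified numerical sketch of the absolute-phase idea: the unwrapped interference phase is linear in optical frequency but offset from the true phase by an unknown multiple of 2π (the interference order), which can be recovered by rounding the intercept of a linear fit, assuming the residual phase offset lies within (-π, π). All signal parameters are invented for the demonstration:

```python
import numpy as np
from scipy.signal import hilbert

f = np.linspace(1.0, 2.0, 4000)     # normalized optical frequency axis
tau, phi0 = 35.0, 0.7               # delay-like slope and offset (assumed)
fringes = np.cos(2 * np.pi * tau * f + phi0)   # idealized two-beam fringes

phase = np.unwrap(np.angle(hilbert(fringes)))  # relative phase, order unknown
slope, intercept = np.polyfit(f, phase, 1)
order = int(np.round(-intercept / (2 * np.pi)))   # interference order from intercept
absolute_phase = phase + 2 * np.pi * order        # recovered absolute phase

print(f"slope/2pi = {slope / (2 * np.pi):.3f} (true tau = {tau}); order = {order}")
```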

  4. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.

  5. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and avoid unnecessary calculation on negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease the computational cost. The threshold is also a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. We use both the contour cue to select the particles and the color cue to describe the targets in a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm compared with the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
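
    A small sketch of the similarity measure mentioned above: the Bhattacharyya coefficient between two normalized colour histograms, and the distance commonly derived from it for comparing the target model with a candidate:

```python
import numpy as np

def bhattacharyya(hist_p, hist_q):
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    bc = float(np.sum(np.sqrt(p * q)))        # 1.0 for identical distributions
    return bc, np.sqrt(max(0.0, 1.0 - bc))    # (coefficient, distance)

model = np.histogram(np.random.normal(100, 10, 5000), bins=32, range=(0, 255))[0]
cand = np.histogram(np.random.normal(110, 12, 5000), bins=32, range=(0, 255))[0]
bc, dist = bhattacharyya(model.astype(float), cand.astype(float))
print(f"coefficient = {bc:.3f}, distance = {dist:.3f}")
```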

  6. PeptidePicker: a scientific workflow with web interface for selecting appropriate peptides for targeted proteomics experiments.

    PubMed

    Mohammed, Yassene; Domański, Dominik; Jackson, Angela M; Smith, Derek S; Deelder, André M; Palmblad, Magnus; Borchers, Christoph H

    2014-06-25

    One challenge in Multiple Reaction Monitoring (MRM)-based proteomics is to select the most appropriate surrogate peptides to represent a target protein. We present here a software package to automatically generate these most appropriate surrogate peptides for an LC/MRM-MS analysis. Our method integrates information about the proteins, their tryptic peptides, and the suitability of these peptides for MRM which is available online in UniProtKB, NCBI's dbSNP, ExPASy, PeptideAtlas, PRIDE, and GPMDB. The scoring algorithm reflects our knowledge in choosing the best candidate peptides for MRM, based on the uniqueness of the peptide in the targeted proteome, its physicochemical properties, and whether it previously has been observed. The modularity of the workflow allows further extension and additional selection criteria to be incorporated. We have developed a simple Web interface where the researcher provides the protein accession number, the subject organism, and peptide-specific options. Currently, the software is designed for human and mouse proteomes, but additional species can easily be added. Our software improved the peptide selection by eliminating human error, considering multiple data sources and all of the isoforms of the protein, and resulted in faster peptide selection - approximately 50 proteins per hour compared to 8 per day. Compiling a list of optimal surrogate peptides for target proteins to be analyzed by LC/MRM-MS has been a cumbersome process, in which expert researchers retrieved information from different online repositories and used their own reasoning to find the most appropriate peptides. Our scientific workflow automates this process by integrating information from different data sources including UniProt, Global Proteome Machine, NCBI's dbSNP, and PeptideAtlas, simulating the researchers' reasoning, and incorporating their knowledge of how to select the best proteotypic peptides for an MRM analysis. The developed software can help to

  7. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, on UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive ability. This shows that simpler and faster models were obtained without any deterioration of the quality of the calibration.
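
    A minimal continuous firefly algorithm sketch using the standard update rule (each firefly moves toward brighter ones with attractiveness β0·exp(-γr²) plus a small random step), here minimizing a toy sphere function; for variable selection as above, positions would instead encode wavelength subsets. All parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def objective(x):                    # toy function to minimize
    return np.sum(np.square(x), axis=-1)

n, dim = 15, 4
beta0, gamma, alpha = 1.0, 1.0, 0.05
X = rng.uniform(-3, 3, (n, dim))

for it in range(200):
    I = objective(X)                 # lower objective = "brighter" firefly
    for i in range(n):
        for j in range(n):
            if I[j] < I[i]:          # move firefly i toward brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
        I[i] = objective(X[i])       # refresh brightness after moving

print("best solution:", X[np.argmin(objective(X))].round(3))
```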

  8. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    NASA Technical Reports Server (NTRS)

    Park, Thomas; Oliver, Emerson; Smith, Austin

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault-detection and measurement down selection.

  9. Covalent inhibitors: an opportunity for rational target selectivity.

    PubMed

    Lagoutte, Roman; Patouret, Remi; Winssinger, Nicolas

    2017-08-01

    There is a resurging interest in compounds that engage their target through covalent interactions. Cysteine's thiol is endowed with enhanced reactivity, making it the nucleophile of choice for covalent engagement with a ligand aligning an electrophilic trap with a cysteine residue in a target of interest. The paucity of cysteine in the proteome, coupled with the fact that closely related proteins do not necessarily share a given cysteine residue, enables an unprecedented level of rational target selectivity. The recent demonstration that a lysine's amine can also be engaged covalently with a mild electrophile extends the potential of covalent inhibitors. The growing database of protein structures facilitates the discovery of covalent inhibitors, while the advent of proteomic technologies enables a finer resolution in the selectivity of covalently engaged proteins. Here, we discuss recent examples of discovery and design of covalent inhibitors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Neural correlates of target selection for reaching movements in superior colliculus

    PubMed Central

    McPeek, Robert M.

    2014-01-01

    We recently demonstrated that inactivation of the primate superior colliculus (SC) causes a deficit in target selection for arm-reaching movements when the reach target is located in the inactivated field (Song JH, Rafal RD, McPeek RM. Proc Natl Acad Sci USA 108: E1433–E1440, 2011). This is consistent with the notion that the SC is part of a general-purpose target selection network beyond eye movements. To understand better the role of SC activity in reach target selection, we examined how individual SC neurons in the intermediate layers discriminate a reach target from distractors. Monkeys reached to touch a color oddball target among distractors while maintaining fixation. We found that many SC neurons robustly discriminate the goal of the reaching movement before the onset of the reach even though no saccade is made. To identify these cells in the context of conventional SC cell classification schemes, we also recorded visual, delay-period, and saccade-related responses in a delayed saccade task. On average, SC cells that discriminated the reach target from distractors showed significantly higher visual and delay-period activity than nondiscriminating cells, but there was no significant difference in saccade-related activity. Whereas a majority of SC neurons that discriminated the reach target showed significant delay-period activity, all nondiscriminating cells lacked such activity. We also found that some cells without delay-period activity did discriminate the reach target from distractors. We conclude that the majority of intermediate-layer SC cells discriminate a reach target from distractors, consistent with the idea that the SC contains a priority map used for effector-independent target selection. PMID:25505107

  11. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Calculations of combustion problems involving large numbers of species and reactions, with a detailed description of the chemistry, can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion, with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species, is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce a lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species, and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
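
    The greedy strategy is easy to state in code: at each step, add the species whose inclusion minimizes the resulting dimension reduction error. In the sketch below the error function is a toy placeholder; in the paper, evaluating a candidate set requires rerunning the RCCE-reduced simulation.

```python
def greedy_select(all_species, k, error):
    """Greedy forward selection: at each step, add the species whose
    inclusion minimizes the error of the enlarged constrained set."""
    selected, remaining = [], list(all_species)
    for _ in range(k):
        best = min(remaining, key=lambda s: error(selected + [s]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy error: decreases with the total "importance" captured; in the paper,
# evaluating a candidate set means rerunning the RCCE-reduced PaSR.
importance = {"CH4": 5.0, "O2": 4.0, "CO2": 1.0, "H2O": 1.5,
              "OH": 3.5, "H": 3.0, "CO": 2.5, "HO2": 2.0}

def toy_error(subset):
    return 1.0 / (1.0 + sum(importance[s] for s in subset))

print(greedy_select(importance, 4, toy_error))   # ['CH4', 'O2', 'OH', 'H']
```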

  12. Selection of phage-displayed accessible recombinant targeted antibodies (SPARTA): methodology and applications.

    PubMed

    D'Angelo, Sara; Staquicini, Fernanda I; Ferrara, Fortunato; Staquicini, Daniela I; Sharma, Geetanjali; Tarleton, Christy A; Nguyen, Huynh; Naranjo, Leslie A; Sidman, Richard L; Arap, Wadih; Bradbury, Andrew Rm; Pasqualini, Renata

    2018-05-03

    We developed a potentially novel and robust antibody discovery methodology, termed selection of phage-displayed accessible recombinant targeted antibodies (SPARTA). This combines an in vitro screening step of a naive human antibody library against known tumor targets, with in vivo selections based on tumor-homing capabilities of a preenriched antibody pool. This unique approach overcomes several rate-limiting challenges to generate human antibodies amenable to rapid translation into medical applications. As a proof of concept, we evaluated SPARTA on 2 well-established tumor cell surface targets, EphA5 and GRP78. We evaluated antibodies that showed tumor-targeting selectivity as a representative panel of antibody-drug conjugates (ADCs) and were highly efficacious. Our results validate a discovery platform to identify and validate monoclonal antibodies with favorable tumor-targeting attributes. This approach may also extend to other diseases with known cell surface targets and affected tissues easily isolated for in vivo selection.

  13. The Target Selective Neural Response — Similarity, Ambiguity, and Learning Effects

    PubMed Central

    Hampshire, Adam; Thompson, Russell; Duncan, John; Owen, Adrian M.

    2008-01-01

    A network of frontal and parietal brain regions is commonly recruited during tasks that require the deliberate ‘top-down’ control of thought and action. Previously, using simple target detection, we have demonstrated that within this frontoparietal network, the right ventrolateral prefrontal cortex (VLPFC) in particular is sensitive to the presentation of target objects. Here, we use a range of target/non-target morphs to plot the target selective response within distinct frontoparietal sub-regions in greater detail. The increased resolution allows us to examine the extent to which different cognitive factors can predict the blood oxygenation level dependent (BOLD) response to targets. Our results reveal that both probability of positive identification (similarity to target) and proximity to the 50% decision boundary (ambiguity) are significant predictors of BOLD signal change, particularly in the right VLPFC. Furthermore, the profile of target related signal change is not static, with the degree of selectivity increasing as the task becomes familiar. These findings demonstrate that frontoparietal sub-regions are recruited under increased cognitive demand and that when recruited, they adapt, using both fast and slow mechanisms, to selectively respond to those items that are of the most relevance to current intentions. PMID:18575585

  14. Use of genetic algorithm for the selection of EEG features

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.

    2015-09-01

    Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with a classification accuracy of 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
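
    A compact sketch of a GA searching over electrode subsets is shown below. For brevity the paper's FCM-based fitness is replaced by a cross-validated linear classifier on synthetic data, and all GA parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=120, n_features=47, n_informative=8,
                           random_state=1)        # 47 stand-in "electrodes"

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()               # mild penalty on electrode count

def ga(n_bits=47, pop=20, gens=20, p_mut=0.02):
    P = rng.random((pop, n_bits)) < 0.2           # sparse initial subsets
    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        P = P[np.argsort(f)[::-1]]                # sort by fitness, best first
        children = []
        while len(children) < pop - 2:            # elitism: keep best two
            a, b = P[rng.integers(0, pop // 2, size=2)]
            cut = rng.integers(1, n_bits)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            child ^= rng.random(n_bits) < p_mut          # bit-flip mutation
            children.append(child)
        P = np.vstack([P[:2], np.array(children)])
    f = np.array([fitness(m) for m in P])
    return P[np.argmax(f)]

print(np.flatnonzero(ga()))                       # indices of chosen electrodes
```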

  15. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions

    PubMed Central

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-01-01

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application. PMID:27924935

  16. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions.

    PubMed

    Huang, Shiqi; Huang, Wenzhun; Zhang, Ting

    2016-12-07

    The most distinctive characteristic of synthetic aperture radar (SAR) is that it can acquire data under all weather conditions and at all times. However, its coherent imaging mechanism introduces a great deal of speckle noise into SAR images, which makes the segmentation of target and shadow regions in SAR images very difficult. This paper proposes a new SAR image segmentation method based on wavelet decomposition and a constant false alarm rate (WD-CFAR). The WD-CFAR algorithm not only is insensitive to the speckle noise in SAR images but also can segment target and shadow regions simultaneously, and it is also able to effectively segment SAR images with a low signal-to-clutter ratio (SCR). Experiments were performed to assess the performance of the new algorithm on various SAR images. The experimental results show that the proposed method is effective and feasible and possesses good characteristics for general application.
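
    The wavelet-decomposition stage of WD-CFAR is not detailed in the abstract, but the CFAR core can be sketched: a plain two-sided cell-averaging CFAR that flags bright targets and dark shadows against a locally estimated clutter level. The window sizes, threshold factor, and pre-smoothing are assumptions standing in for the paper's wavelet stage.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=2, train=8, k=5.0):
    """Two-sided cell-averaging CFAR: estimate local clutter from a training
    ring around each pixel (excluding a guard window), then flag targets
    above k*clutter and shadows below clutter/k."""
    sm = uniform_filter(img, 3)                    # light smoothing vs. speckle
    big, small = 2 * (guard + train) + 1, 2 * guard + 1
    ring = (uniform_filter(sm, big) * big**2
            - uniform_filter(sm, small) * small**2) / (big**2 - small**2)
    return sm > k * ring, sm < ring / k

rng = np.random.default_rng(2)
scene = rng.exponential(1.0, size=(128, 128))      # speckle-like clutter
scene[60:64, 60:64] += 20.0                        # bright target patch
scene[80:90, 80:90] *= 0.02                        # shadow region
targets, shadows = ca_cfar(scene)
print(targets.sum(), shadows.sum())
```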

  17. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
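
    The extraction-plus-classification chain maps directly onto a standard pipeline. The sketch below wires PCA into an SVM with cross-validation on a stand-in dataset; the paper's actual inputs were ROI chips from the GOC/OT-MACH stage.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# PCA feature extraction feeding an SVM classifier, cross-validated.
X, y = load_digits(return_X_y=True)               # stand-in for ROI chips
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```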

  18. Improved targeted immunization strategies based on two rounds of selection

    NASA Astrophysics Data System (ADS)

    Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping

    2018-04-01

    In high-degree targeted immunization, where the number of vaccine doses is limited, more than one node with the same degree may meet the requirement of high degree centrality; how can we choose a certain number of nodes from those nodes so that the number of immunized nodes does not exceed the limit? In this paper, we introduce a new idea derived from the selection process of a second-round exam to solve this problem and then propose three improved targeted immunization strategies. In these proposed strategies, the immunized nodes are selected through two rounds of selection: we increase the quota of the first-round selection according to the evaluation criterion of degree centrality and then consider another characteristic parameter of the node, such as the node's clustering coefficient, betweenness or closeness, to help choose the targeted nodes in the second-round selection, as sketched in the code below. To validate the effectiveness of the proposed strategies, we compare them with the degree immunizations, including high degree targeted and high degree adaptive immunization, using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of selection are effective for heterogeneous networks and that their immunization effects are better than those of the degree immunizations.
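
    A sketch of the two-round idea using networkx, with the clustering coefficient as the second-round criterion (betweenness and closeness are the paper's alternatives); the quota factor is an assumption.

```python
import networkx as nx

def two_round_immunization(G, budget, quota_factor=2):
    """Round 1: shortlist the top quota_factor*budget nodes by degree.
    Round 2: keep 'budget' of them ranked by a second parameter, here the
    clustering coefficient."""
    shortlist = sorted(G.nodes, key=G.degree, reverse=True)[:quota_factor * budget]
    cc = nx.clustering(G)
    # Prefer low clustering: such hubs bridge otherwise separate regions.
    return sorted(shortlist, key=lambda n: cc[n])[:budget]

G = nx.barabasi_albert_graph(500, 3, seed=0)      # heterogeneous network
print(two_round_immunization(G, budget=10))
```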

  19. Highly selective luminescent nanostructures for mitochondrial imaging and targeting

    NASA Astrophysics Data System (ADS)

    Fanizza, E.; Iacobazzi, R. M.; Laquintana, V.; Valente, G.; Caliandro, G.; Striccoli, M.; Agostiano, A.; Cutrignelli, A.; Lopedota, A.; Curri, M. L.; Franco, M.; Depalo, N.; Denora, N.

    2016-02-01

    Here a luminescent hybrid nanostructure based on functionalized quantum dots (QDs) is used as a fluorescent imaging agent able to target selectively mitochondria thanks to the molecular recognition of the translocator protein (TSPO). The selective targeting of such an 18 kDa protein mainly located in the outer mitochondrial membrane and overexpressed in several pathological states including neurodegenerative diseases and cancers may provide valuable information for the early diagnosis and therapy of human disorders. In particular, the rational design of amino functionalized luminescent silica coated QD nanoparticles (QD@SiO2 NPs) provides a versatile nanoplatform to anchor a potent and selective TSPO ligand, characterized by a 2-phenyl-imidazo[1,2-a]pyridine acetamide structure along with a derivatizable carboxylic end group, useful to conjugate the TSPO ligand and achieve TSPO-QD@SiO2 NPs by means of a covalent amide bond. The colloidal stability and optical properties of the proposed nanomaterials are comprehensively investigated and their potential as mitochondrial imaging agents is fully assessed. Sub-cellular fractionation, together with confocal laser scanning fluorescence microscopy and co-localization analysis of targeted TSPO-QD@SiO2 NPs in C6 glioma cells overexpressing the TSPO, proves the great potential of these multifunctional nanosystems as in vitro selective mitochondrial imaging agents.

  20. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  1. Selective Mitochondrial Targeting Exerts Anxiolytic Effects In Vivo.

    PubMed

    Nussbaumer, Markus; Asara, John M; Teplytska, Larysa; Murphy, Michael P; Logan, Angela; Turck, Christoph W; Filiou, Michaela D

    2016-06-01

    Current treatment strategies for anxiety disorders are predominantly symptom-based. However, a third of anxiety patients remain unresponsive to anxiolytics, highlighting the need for more effective, mechanism-based therapeutic approaches. We have previously compared high vs low anxiety mice and identified changes in mitochondrial pathways, including oxidative phosphorylation and oxidative stress. In this work, we show that selective pharmacological targeting of these mitochondrial pathways exerts anxiolytic effects in vivo. We treated high anxiety-related behavior (HAB) mice with MitoQ, an antioxidant that selectively targets mitochondria. MitoQ administration resulted in decreased anxiety-related behavior in HAB mice. This anxiolytic effect was specific for high anxiety as MitoQ treatment did not affect the anxiety phenotype of C57BL/6N and DBA/2J mouse strains. We furthermore investigated the molecular underpinnings of the MitoQ-driven anxiolytic effect and found that MitoQ treatment alters the brain metabolome and that the response to MitoQ treatment is characterized by distinct molecular signatures. These results indicate that a mechanism-driven approach based on selective mitochondrial targeting has the potential to attenuate the high anxiety phenotype in vivo, thus paving the way for translational implementation as long-term MitoQ administration is well-tolerated with no reported side effects in mice and humans.

  2. Ultra-Low-Power MEMS Selective Gas Sensors

    NASA Technical Reports Server (NTRS)

    Stetter, Joseph

    2012-01-01

    This innovation is a system for gas sensing that includes an ultra-low-power MEMS (microelectromechanical system) gas sensor, combined with unique electronic circuitry and a proprietary algorithm for operating the sensor. The electronics were created from scratch, and represent a novel design capable of low-power operation of the proprietary MEMS gas sensor platform. The algorithm is used to identify a specific target gas in a gas mixture, making the sensor selective to that target gas.

  3. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast response of a system is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
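
    The ADALINE here is the classic Widrow-Hoff structure: sine/cosine inputs at each harmonic of the fundamental, trained by LMS so that the weights track the harmonic amplitudes. The sketch below is a generic version of that structure, not the paper's modified algorithm; the step size and test signal are illustrative.

```python
import numpy as np

def adaline_harmonics(signal, fs, f0, n_harm, mu=0.05):
    """ADALINE with sin/cos inputs per harmonic, trained by the Widrow-Hoff
    (LMS) rule; the weights track each harmonic's amplitude and phase."""
    t = np.arange(len(signal)) / fs
    k = np.arange(1, n_harm + 1)
    w = np.zeros(2 * n_harm)
    for x_k, tk in zip(signal, t):
        phi = np.concatenate([np.sin(2 * np.pi * f0 * k * tk),
                              np.cos(2 * np.pi * f0 * k * tk)])
        e = x_k - w @ phi                  # instantaneous estimation error
        w += 2 * mu * e * phi              # LMS update
    return np.hypot(w[:n_harm], w[n_harm:])   # harmonic amplitudes

fs, f0 = 5000, 50
t = np.arange(0, 0.4, 1 / fs)
x = (1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
     + 0.1 * np.sin(2 * np.pi * 250 * t))
print(np.round(adaline_harmonics(x, fs, f0, n_harm=5), 3))
# ~[1.0, 0.0, 0.3, 0.0, 0.1]: fundamental, 3rd and 5th harmonics recovered
```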

  4. Selection of optimum median-filter-based ambiguity removal algorithm parameters for NSCAT. [NASA scatterometer

    NASA Technical Reports Server (NTRS)

    Shaffer, Scott; Dunbar, R. Scott; Hsiao, S. Vincent; Long, David G.

    1989-01-01

    The NASA Scatterometer, NSCAT, is an active spaceborne radar designed to measure the normalized radar backscatter coefficient (sigma0) of the ocean surface. These measurements can, in turn, be used to infer the surface vector wind over the ocean using a geophysical model function. Several ambiguous wind vectors result because of the nature of the model function. A median-filter-based ambiguity removal algorithm will be used by the NSCAT ground data processor to select the best wind vector from the set of ambiguous wind vectors. This process is commonly known as dealiasing or ambiguity removal. The baseline NSCAT ambiguity removal algorithm and the method used to select the set of optimum parameter values are described. An extensive simulation of the NSCAT instrument and ground data processor provides a means of testing the resulting tuned algorithm. This simulation generates the ambiguous wind-field vectors expected from the instrument as it orbits over a set of realistic mesoscale wind fields. The ambiguous wind field is then dealiased using the median-based ambiguity removal algorithm. Performance is measured by comparison of the unambiguous wind fields with the true wind fields. Results have shown that the median-filter-based ambiguity removal algorithm satisfies NSCAT mission requirements.
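
    A simplified version of median-filter ambiguity removal can be sketched directly: initialize each cell with one ambiguity, then iteratively re-select, per cell, the ambiguity closest to the component-wise median of the neighboring selections. The grid size, neighborhood, and iteration count below are assumptions, not the tuned NSCAT parameters.

```python
import numpy as np

def median_dealias(candidates, n_iter=10):
    """candidates: (ny, nx, n_amb, 2) array of ambiguous wind vectors per
    cell. Start from the first-ranked ambiguity, then iteratively replace
    each cell's choice by the ambiguity closest to the component-wise
    median of the neighboring selections."""
    ny, nx = candidates.shape[:2]
    sel = candidates[:, :, 0, :].copy()
    for _ in range(n_iter):
        for i in range(ny):
            for j in range(nx):
                i0, i1 = max(i - 1, 0), min(i + 2, ny)
                j0, j1 = max(j - 1, 0), min(j + 2, nx)
                med = np.median(sel[i0:i1, j0:j1].reshape(-1, 2), axis=0)
                d = np.linalg.norm(candidates[i, j] - med, axis=1)
                sel[i, j] = candidates[i, j, np.argmin(d)]
    return sel

rng = np.random.default_rng(3)
true = np.tile([5.0, 2.0], (8, 8, 1))                       # uniform true wind
amb = np.stack([true, -true], axis=2) + 0.3 * rng.normal(size=(8, 8, 2, 2))
flip = rng.random((8, 8)) < 0.3                             # 30% cells mis-ranked
amb[flip] = amb[flip][:, ::-1, :]
sel = median_dealias(amb)
print(np.all(sel[..., 0] > 0))                              # True: all dealiased
```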

  5. Dosing algorithm to target a predefined AUC in patients with primary central nervous system lymphoma receiving high dose methotrexate.

    PubMed

    Joerger, Markus; Ferreri, Andrés J M; Krähenbühl, Stephan; Schellens, Jan H M; Cerny, Thomas; Zucca, Emanuele; Huitema, Alwin D R

    2012-02-01

    There is no consensus regarding optimal dosing of high dose methotrexate (HDMTX) in patients with primary CNS lymphoma. Our aim was to develop a convenient dosing algorithm to target AUC(MTX) in the range between 1000 and 1100 µmol l⁻¹ h. A population covariate model from a pooled dataset of 131 patients receiving HDMTX was used to simulate concentration-time curves of 10,000 patients and test the efficacy of a dosing algorithm based on 24 h MTX plasma concentrations to target the prespecified AUC(MTX). These data simulations included interindividual, interoccasion and residual unidentified variability. Patients received a total of four simulated cycles of HDMTX and adjusted MTX dosages were given for cycles two to four. The dosing algorithm proposes MTX dose adaptations ranging from +75% in patients with MTX C24 < 0.5 µmol l⁻¹ up to -35% in patients with MTX C24 > 12 µmol l⁻¹. The proposed dosing algorithm resulted in a marked improvement of the proportion of patients within the AUC(MTX) target between 1000 and 1100 µmol l⁻¹ h (11% with the standard MTX dose, 35% with the adjusted dose) and a marked reduction of the interindividual variability of MTX exposure. A simple and practical dosing algorithm for HDMTX has been developed based on MTX 24 h plasma concentrations, and its potential efficacy in improving the proportion of patients within a prespecified target AUC(MTX) and reducing the interindividual variability of MTX exposure has been shown by data simulations. The clinical benefit of this dosing algorithm should be assessed in patients with primary central nervous system lymphoma (PCNSL). © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
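
    The abstract states only the two extreme adjustments of the algorithm; a sketch with those two rules, plus hypothetical intermediate bins, looks as follows.

```python
def adjust_mtx_dose(prev_dose_g_m2, c24_umol_l):
    """Dose adaptation keyed to the 24 h MTX plasma concentration (C24).
    Only the two extreme rules come from the abstract; the intermediate
    bins are hypothetical placeholders."""
    if c24_umol_l < 0.5:
        factor = 1.75            # +75%, stated in the abstract
    elif c24_umol_l > 12.0:
        factor = 0.65            # -35%, stated in the abstract
    elif c24_umol_l < 2.0:       # hypothetical intermediate adjustments
        factor = 1.25
    elif c24_umol_l > 6.0:
        factor = 0.85
    else:
        factor = 1.0
    return prev_dose_g_m2 * factor

print(adjust_mtx_dose(3.5, 0.3))   # 6.125: next-cycle dose after a low C24
```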

  6. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  7. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    PubMed

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two "Genetic" and "Ranker" searching procedures. The performance of these algorithms has been evaluated using four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F-test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in

  8. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.

    PubMed

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-03-07

    In this paper, we consider the problem of tracking the direction of arrival (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar, and a high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of adjacent moments was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix elements. Finally, the least squares method was used to estimate the DOD and DOA. The algorithm realizes automatic association of the angles and provides better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides technical support for the practical application of MIMO radar.

  9. Effects of Mode of Target Task Selection on Learning about Plants in a Mobile Learning Environment: Effortful Manual Selection versus Effortless QR-Code Selection

    ERIC Educational Resources Information Center

    Gao, Yuan; Liu, Tzu-Chien; Paas, Fred

    2016-01-01

    This study compared the effects of effortless selection of target plants using quick respond (QR) code technology to effortful manual search and selection of target plants on learning about plants in a mobile device supported learning environment. In addition, it was investigated whether the effectiveness of the 2 selection methods was…

  10. Two novel motion-based algorithms for surveillance video analysis on embedded platforms

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.

    2010-05-01

    This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.

  11. Hyperspectral data collection for the assessment of target detection algorithms: the Viareggio 2013 trial

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni; De Ceglie, Sergio Ugo; Riccobono, Aldo; Chiarantini, Leandro

    2014-10-01

    Airborne hyperspectral imagery is valuable for military and civilian applications, such as target identification, detection of anomalies and changes within multiple acquisitions. In target detection (TD) applications, the performance assessment of different algorithms is an important and critical issue. In this context, the small number of public available hyperspectral data motivated us to perform an extensive measurement campaign including various operating scenarios. The campaign was organized by CISAM in cooperation with University of Pisa, Selex ES and CSSN-ITE, and it was conducted in Viareggio, Italy in May, 2013. The Selex ES airborne hyperspectral sensor SIM.GA was mounted on board of an airplane to collect images over different sites in the morning and afternoon of two subsequent days. This paper describes the hyperspectral data collection of the trial. Four different sites were set up, representing a complex urban scenario, two parking lots and a rural area. Targets with dimensions comparable to the sensor ground resolution were deployed in the sites to reproduce different operating situations. An extensive ground truth documentation completes the data collection. Experiments to test anomalous change detection techniques were set up changing the position of the deployed targets. Search and rescue scenarios were simulated to evaluate the performance of anomaly detection algorithms. Moreover, the reflectance signatures of the targets were measured on the ground to perform spectral matching in varying atmospheric and illumination conditions. The paper presents some preliminary results that show the effectiveness of hyperspectral data exploitation for the object detection tasks of interest in this work.

  12. Local anesthesia selection algorithm in patients with concomitant somatic diseases.

    PubMed

    Anisimova, E N; Sokhov, S T; Letunova, N Y; Orekhova, I V; Gromovik, M V; Erilin, E A; Ryazantsev, N A

    2016-01-01

    The paper presents basic principles of local anesthesia selection in patients with concomitant somatic diseases. These principles are: history taking; analysis of drug interactions with local anesthetic and sedation agents; determination of the functional status of the patient; correction of patient anxiety; and dental care with monitoring of hemodynamic parameters. It was found that adhering to this algorithm helps prevent urgent conditions in outpatient dentistry.

  13. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10⁹ to 1×10¹¹ M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  14. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10⁹ to 1×10¹¹ M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  15. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study.

    PubMed

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2017-01-05

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The results revealed the superiority of this new powerful algorithm over the genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive abilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. The SDSS-XDQSO quasar targeting catalog

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Hennawi, J. F.; Hogg, D. W.; Myers, A. D.; Ross, N. P.

    2011-01-01

    We present the SDSS-XDQSO quasar targeting catalog for efficient flux-based quasar target selection down to the faint limit of the SDSS catalog, even at medium redshifts (2.5 < z < 3). We build models of the distributions of stars and quasars in flux space down to the flux limit by applying the extreme-deconvolution method (XD) to estimate the underlying density. We properly convolve this density with the flux uncertainties when evaluating the probability that an object is a quasar. This results in a targeting algorithm that is more principled, more efficient, and faster than other similar methods. We apply the algorithm to derive low- (z < 2.2), medium- (2.2 <= z <= 3.5), and high-redshift (z > 3.5) quasar probabilities for all 160,904,060 point sources with dereddened i-band magnitude between 17.75 and 22.45 mag in SDSS Data Release 8. The catalog can be used to define a uniformly selected and efficient low- or medium-redshift quasar survey, such as that needed for the SDSS-III Baryon Oscillation Spectroscopic Survey project. We show that the XDQSO technique performs as well as the current best photometric quasar selection technique at low redshift, and outperforms all other flux-based methods for selecting the medium-redshift quasars of our primary interest. Research supported by NASA (grant NNX08AJ48G) and the NSF (grant AST-0908357).

  17. A Genetic Algorithm Based on Sexual Selection for the Multidimensional 0/1 Knapsack Problems

    NASA Astrophysics Data System (ADS)

    Varnamkhasti, Mohammad Jalali; Lee, Lai Soon

    In this study, a new technique is presented for choosing mate chromosomes during sexual selection in a genetic algorithm. The population is divided into groups of males and females. During the sexual selection, the female chromosome is selected by tournament selection, while the male chromosome is selected based on its Hamming distance from the selected female chromosome, its fitness value, or its active genes. Computational experiments are conducted on the proposed technique and the results are compared with some selection mechanisms commonly used in the literature for solving multidimensional 0/1 knapsack problems.
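
    The mate-choice rule is simple to express in code. Below, the female is drawn by tournament selection and the male is the one maximizing the Hamming distance to her, one of the three criteria the paper names; population sizes and the toy fitness are illustrative.

```python
import random

def tournament(group, fitness, k=3):
    """Standard tournament selection (used for the female group)."""
    return max(random.sample(group, k), key=fitness)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def select_parents(males, females, fitness):
    """Female by tournament; male as the most dissimilar male (largest
    Hamming distance) to that female."""
    female = tournament(females, fitness)
    male = max(males, key=lambda m: hamming(m, female))
    return female, male

random.seed(0)
bits = lambda: tuple(random.randint(0, 1) for _ in range(12))
males = [bits() for _ in range(10)]
females = [bits() for _ in range(10)]
f, m = select_parents(males, females, fitness=sum)   # toy fitness: ones count
print(f, m, hamming(f, m))
```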

  18. Target-object integration, attention distribution, and object orientation interactively modulate object-based selection.

    PubMed

    Al-Janabi, Shahd; Greenberg, Adam S

    2016-10-01

    The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue-thus, attention was initially focused widely-and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.

  19. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor to resolve this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in these four improved algorithms. In order to improve the rotation invariance of the slice image, three new improved feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems have different performances under different conditions.

  20. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    and negative swinging angle and the computation of the time window are analyzed and discussed, and several strategies to improve the efficiency of this model are put forward. In order to solve the model, we introduce the concept of the activity sequence map, by which the choice of activity and the start time of the activity can be decoupled. We also introduce three neighborhood operators to search the solution space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Lastly, an algorithm based on the genetic algorithm is put forward to solve the model. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator, and a collision elimination operator are designed in the paper. Finally, the scheduling result and the simulation for a practical example on 5 satellites and 100 point targets with swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15, and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.

  1. Improved training for target detection using Fukunaga-Koontz transform and distance classifier correlation filter

    NASA Astrophysics Data System (ADS)

    Elbakary, M. I.; Alam, M. S.; Aslan, M. S.

    2008-03-01

    In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information related to the target, such as direction, position, and size, is lost. If the target reappears at a later frame, it may not be tracked again because the 3D orientation, size, and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips randomly. This paper introduces a novel idea that eliminates the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing the number of training chips randomly, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select the training chips such that there is one training chip for each cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
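
    The K-means replacement for manual chip selection can be sketched as follows: cluster per-frame descriptors and take, from each cluster, the frame nearest the centroid as a training chip. The feature representation here is a random stand-in for real frame descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_training_chips(frame_features, n_clusters=5, seed=0):
    """Cluster per-frame descriptors with K-means and return, for each
    cluster, the index of the frame nearest the centroid; those frames
    supply the DCCF training chips instead of a manual random pick."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(frame_features)
    picks = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        d = np.linalg.norm(frame_features[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(d)]))
    return picks

rng = np.random.default_rng(4)
features = rng.normal(size=(200, 16))   # stand-in per-frame descriptors
print(pick_training_chips(features))    # one representative frame per cluster
```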

  2. Algorithm for AEEG data selection leading to wireless and long term epilepsy monitoring.

    PubMed

    Casson, Alexander J; Yates, David C; Patel, Shyam; Rodriguez-Villegas, Esther

    2007-01-01

    High quality, wireless ambulatory EEG (AEEG) systems that can operate over extended periods of time are not currently feasible due to the high power consumption of wireless transmitters. Previous work has thus proposed data reduction by only transmitting sections of data that contain candidate epileptic activity. This paper investigates algorithms by which this data selection can be carried out. It is essential that the algorithm is low power and that all possible features are identified, even at the expense of more false detections. Given this, a brief review of spike detection algorithms is carried out with a view to using these algorithms to drive the data reduction process. A CWT-based algorithm is deemed most suitable for use; the algorithm is described in detail and its performance tested. It is found that over 90% of expert-marked spikes are identified whilst giving a 40% reduction in the amount of data to be transmitted and analysed. The performance varies with the recording duration in response to each detection, and this effect is also investigated. The proposed algorithm will form the basis of a new AEEG system that allows wireless and longer term epilepsy monitoring.
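
    A CWT-based data selector of the kind reviewed here can be sketched with a Ricker (Mexican hat) wavelet implemented directly in numpy: flag samples where the wavelet response at spike-like scales exceeds a multiple of its median level, then hold transmission on for a fixed window after each detection. The scales, threshold, and hold time below are illustrative, not the paper's values.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def select_sections(eeg, fs, scales_s=(0.02, 0.04, 0.07), k=4.0, hold_s=2.0):
    """Flag data for transmission when the CWT magnitude at spike-like
    scales exceeds k times its median; hold transmission on for hold_s
    seconds after each detection."""
    detect = np.zeros(len(eeg), dtype=bool)
    for scale_s in scales_s:
        a = scale_s * fs
        w = ricker(int(10 * a) | 1, a)            # odd-length wavelet kernel
        c = np.abs(np.convolve(eeg, w, mode="same"))
        detect |= c > k * np.median(c)
    out = np.zeros_like(detect)
    hold = int(hold_s * fs)
    for idx in np.flatnonzero(detect):
        out[idx:idx + hold] = True
    return out                                    # transmit where out is True

rng = np.random.default_rng(5)
fs = 200
x = rng.normal(0, 1, 60 * fs)                     # one minute of background
x[3000:3008] += 12.0                              # injected spike-like event
print(select_sections(x, fs).mean())              # fraction transmitted
```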

  3. Selective Mitochondrial Targeting Exerts Anxiolytic Effects In Vivo

    PubMed Central

    Nussbaumer, Markus; Asara, John M; Teplytska, Larysa; Murphy, Michael P; Logan, Angela; Turck, Christoph W; Filiou, Michaela D

    2016-01-01

    Current treatment strategies for anxiety disorders are predominantly symptom-based. However, a third of anxiety patients remain unresponsive to anxiolytics, highlighting the need for more effective, mechanism-based therapeutic approaches. We have previously compared high vs low anxiety mice and identified changes in mitochondrial pathways, including oxidative phosphorylation and oxidative stress. In this work, we show that selective pharmacological targeting of these mitochondrial pathways exerts anxiolytic effects in vivo. We treated high anxiety-related behavior (HAB) mice with MitoQ, an antioxidant that selectively targets mitochondria. MitoQ administration resulted in decreased anxiety-related behavior in HAB mice. This anxiolytic effect was specific for high anxiety as MitoQ treatment did not affect the anxiety phenotype of C57BL/6N and DBA/2J mouse strains. We furthermore investigated the molecular underpinnings of the MitoQ-driven anxiolytic effect and found that MitoQ treatment alters the brain metabolome and that the response to MitoQ treatment is characterized by distinct molecular signatures. These results indicate that a mechanism-driven approach based on selective mitochondrial targeting has the potential to attenuate the high anxiety phenotype in vivo, thus paving the way for translational implementation as long-term MitoQ administration is well-tolerated with no reported side effects in mice and humans. PMID:26567514

  4. Selection and trajectory design to mission secondary targets

    NASA Astrophysics Data System (ADS)

    Victorino Sarli, Bruno; Kawakatsu, Yasuhiro

    2017-02-01

    Recently, with new trajectory design techniques and the use of low-thrust propulsion systems, missions have become more efficient and cheaper with respect to propellant. As a way to increase a mission's value and scientific return, secondary targets close to the main trajectory are often added with a small change in the transfer trajectory. As a result of their large number, their importance, and the ease of performing a flyby, asteroids are commonly used as such targets. This work uses Primer Vector theory to define the direction and magnitude of the thrust for a minimum fuel consumption problem. The design of a low-thrust trajectory with a midcourse asteroid flyby is challenging not only for the low-thrust problem solution but also with respect to the selection of a target and its flyby point. Currently more than 700,000 minor bodies have been identified, which generates a very large number of possible flyby points. This work uses a combination of reachability, reference orbit, and linear theory to select appropriate candidates, drastically reducing the simulation time, to be later included in the main trajectory and optimized. Two test cases are presented using the aforementioned selection process and optimization to add and design a secondary flyby for a mission with the primary objectives of a 3200 Phaethon flyby and a 25143 Itokawa rendezvous.

  5. Dissecting patterns of preparatory activity in the frontal eye fields during pursuit target selection.

    PubMed

    Raghavan, Ramanujan T; Joshua, Mati

    2017-10-01

    We investigated the composition of preparatory activity of frontal eye field (FEF) neurons in monkeys performing a pursuit target selection task. In response to the orthogonal motion of a large and a small reward target, monkeys initiated pursuit biased toward the direction of large reward target motion. FEF neurons exhibited robust preparatory activity preceding movement initiation in this task. Preparatory activity consisted of two components: ramping activity that was constant across target selection conditions, and a flat offset in firing rates that signaled the target selection condition. Ramping activity accounted for 50% of the variance in the preparatory activity and was linked most strongly, on a trial-by-trial basis, to pursuit eye movement latency rather than to its direction or gain. The offset in firing rates that discriminated target selection conditions accounted for 25% of the variance in the preparatory activity and was commensurate with a winner-take-all representation, signaling the direction of large reward target motion rather than a representation that matched the parameters of the upcoming movement. These findings offer new insights into the role that the frontal eye fields play in target selection and pursuit control. They show that preparatory activity in the FEF signals more strongly when to move rather than where or how to move, and they suggest that structures outside the FEF augment its contributions to the target selection process. NEW & NOTEWORTHY We used the smooth eye movement pursuit system to link patterns of preparatory activity in the frontal eye fields to movement during a target selection task. The dominant pattern was a ramping signal that did not discriminate between selection conditions and was linked, on a trial-by-trial basis, to movement latency. A weaker pattern was composed of a constant signal that discriminated between selection conditions but was only weakly linked to the movement parameters.

  6. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    Multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from regions of low likelihood to regions of high likelihood, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAMzs, can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
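
    A minimal nested sampling estimator is sketched below on a one-dimensional toy problem. The constrained draw uses naive rejection from the prior, which is precisely the step the abstract proposes to strengthen (M-H moves, or DREAMzs); the final live-point correction is omitted for brevity, so the estimate is slightly biased low.

```python
import numpy as np

rng = np.random.default_rng(6)

def nested_sampling(log_like, prior_sample, n_live=100, n_iter=600):
    """Minimal nested sampling estimate of the evidence Z: repeatedly
    discard the worst live point, credit it with the expected prior-volume
    shrinkage, and replace it by a prior draw with higher likelihood."""
    live = prior_sample(n_live)
    live_ll = np.array([log_like(x) for x in live])
    log_z, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        log_x = -i / n_live                       # expected shrinkage of volume
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        while True:                               # draw subject to L > L_worst
            x_new = prior_sample(1)[0]
            ll_new = log_like(x_new)
            if ll_new > live_ll[worst]:
                break
        live[worst], live_ll[worst] = x_new, ll_new
        log_x_prev = log_x
    return log_z

# Toy: N(0, 0.1^2) likelihood under a U(-1, 1) prior; exact evidence = 0.5.
log_like = lambda x: -0.5 * (x / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
prior_sample = lambda n: rng.uniform(-1, 1, n)
print(np.exp(nested_sampling(log_like, prior_sample)))      # ~0.5
```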

  7. The production route selection algorithm in virtual manufacturing networks

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    The increasing requirements and competition in the global market are challenges for companies' profitability in production and supply chain management. This situation became the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed for selecting a production route from the set of admissible routes that meets the technology and resource requirements under the criterion of minimum cost.
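
    As a concrete illustration of that selection step, the sketch below filters candidate routes by technology and capacity requirements and picks the admissible route of minimum cost; the data model (Route fields, the example routes) is invented for illustration, not taken from the paper.

      # Minimal admissible-route selection sketch (illustrative fields and data).
      from dataclasses import dataclass

      @dataclass
      class Route:
          name: str
          required_tech: set      # technologies the route needs
          required_capacity: int  # resource units the route consumes
          cost: float

      def select_route(routes, available_tech, available_capacity):
          # keep only routes satisfying technology and resource requirements
          admissible = [r for r in routes
                        if r.required_tech <= available_tech
                        and r.required_capacity <= available_capacity]
          # among admissible routes, pick the minimum-cost one
          return min(admissible, key=lambda r: r.cost) if admissible else None

      routes = [Route("A", {"milling"}, 5, 120.0),
                Route("B", {"milling", "coating"}, 3, 90.0),
                Route("C", {"casting"}, 4, 70.0)]
      best = select_route(routes, {"milling", "coating"}, available_capacity=4)
      print(best.name if best else "no admissible route")   # -> "B"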

  8. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    PubMed Central

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on external marker location; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were employed to find the optimum location of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) prediction model was used as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the markers in each small segment separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor motion was predicted using the motion data of the external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature
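
    A hedged sketch of the flavor of one such selector, correlation-based ranking: each candidate external-marker signal is scored by its absolute Pearson correlation with the internal tumor motion, and the top-k markers are kept. The simulated signals and the scoring rule below are illustrative assumptions; the study itself scores marker locations by the RMSE of an ANFIS prediction model.

      # Correlation-based ranking of candidate external markers (illustrative).
      import numpy as np

      def select_markers(marker_signals, tumor_signal, k=3):
          """marker_signals: (n_markers, n_samples); tumor_signal: (n_samples,)."""
          scores = np.array([abs(np.corrcoef(m, tumor_signal)[0, 1])
                             for m in marker_signals])
          ranked = np.argsort(scores)[::-1]        # best-correlated first
          return ranked[:k], scores[ranked[:k]]

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 500)
      tumor = np.sin(2 * np.pi * 0.25 * t)                  # simulated tumor motion
      markers = np.stack([tumor + rng.normal(0, s, t.size)  # increasingly noisy markers
                          for s in (0.1, 0.5, 1.0, 2.0)])
      idx, sc = select_markers(markers, tumor, k=2)
      print(idx, np.round(sc, 3))   # the least-noisy markers rank first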

  9. Signal-Noise Identification of Magnetotelluric Signals Using Fractal-Entropy and Clustering Algorithm for Targeted De-Noising

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Xian; Gong, Jinzhe; Tang, Jingtian; Ren, Zhengyong; Li, Guang; Deng, Yanli; Cai, Jin

    A new technique is proposed for signal-noise identification and targeted de-noising of magnetotelluric (MT) signals. The method is based on fractal-entropy features and a clustering algorithm, which automatically identify signal sections corrupted by common interference (square, triangle and pulse waves), enabling targeted de-noising and preventing the loss of useful information in filtering. To implement the technique, four characteristic parameters — fractal box dimension (FBD), Higuchi fractal dimension (HFD), fuzzy entropy (FuEn) and approximate entropy (ApEn) — are extracted from the MT time series. The fuzzy c-means (FCM) clustering technique is used to analyze the characteristic parameters and automatically distinguish signals with strong interference from the rest. The wavelet threshold (WT) de-noising method is used only to suppress the identified strong interference in the selected signal sections. The technique is validated on signal samples with known interference before being applied to a set of field-measured MT/audio-magnetotelluric (AMT) data. Compared with the conventional de-noising strategy that blindly applies the filter to the overall dataset, the proposed method automatically identifies and purposefully suppresses the intermittent interference in the MT/AMT signal. The resulting apparent resistivity-phase curve is more continuous and smooth, and the slow-change trend in the low-frequency range is more precisely preserved. Moreover, the characteristics of the target-filtered MT/AMT signal are close to the essential characteristics of the natural field, and the result more accurately reflects the inherent electrical structure information of the measured site.
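
    Of the four characteristic parameters, the Higuchi fractal dimension is compact enough to sketch. The implementation below follows the standard Higuchi construction (curve lengths at multiple subsampling scales, slope of a log-log fit), with kmax as the usual tuning parameter; the other three features and the fuzzy c-means step are omitted.

      # Higuchi fractal dimension of a 1-D time series (standard construction).
      import numpy as np

      def higuchi_fd(x, kmax=8):
          x = np.asarray(x, dtype=float)
          n = x.size
          lengths = []
          for k in range(1, kmax + 1):
              lk = []
              for m in range(k):
                  idx = np.arange(m, n, k)               # subsampled series x[m], x[m+k], ...
                  if idx.size < 2:
                      continue
                  dist = np.abs(np.diff(x[idx])).sum()   # curve length at scale k
                  norm = (n - 1) / ((idx.size - 1) * k)  # length normalization
                  lk.append(dist * norm / k)
              lengths.append(np.mean(lk))
          # HFD is the slope of log(L(k)) against log(1/k)
          slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lengths), 1)
          return slope

      rng = np.random.default_rng(0)
      print(higuchi_fd(rng.normal(size=2000)))   # white noise: HFD close to 2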

  10. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track only the component of motion parallel to leaf travel. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated.

  11. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which use a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, the degree to which a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems, respectively. For the multisensor multitarget problem, which is NP-complete, candidate orientations are generated by selecting, for each sensor, the orientations it can rotate to in order to cover each target falling in its FoV disk, and a genetic algorithm is applied to obtain an approximated minimum subset of sensors that covers all the targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset.
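
    A toy version of such an expected-coverage score: zero outside the sensing range or field of view, and otherwise increasing as the deflection angle and the distance shrink. The exact weighting below is an assumption for illustration; the paper defines its own expected coverage value.

      # Illustrative expected-coverage score for a directional (FoV) sensor.
      import math

      def expected_coverage(sensor_xy, orient_rad, half_fov_rad, sense_range, target_xy):
          dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
          dist = math.hypot(dx, dy)
          if dist > sense_range:
              return 0.0                                   # out of sensing range
          # deflection angle between sensor orientation and target bearing, wrapped to [0, pi]
          deflection = abs(((math.atan2(dy, dx) - orient_rad) + math.pi) % (2 * math.pi) - math.pi)
          if deflection > half_fov_rad:
              return 0.0                                   # outside the field of view
          # higher score for small deflection and small distance (assumed weighting)
          return (1 - deflection / half_fov_rad) * (1 - dist / sense_range)

      print(expected_coverage((0, 0), 0.0, math.radians(30), 10.0, (5, 1)))  # ~0.31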

  12. Materials Selection Criteria for Nuclear Power Applications: A Decision Algorithm

    NASA Astrophysics Data System (ADS)

    Rodríguez-Prieto, Álvaro; Camacho, Ana María; Sebastián, Miguel Ángel

    2016-02-01

    An innovative methodology based on stringency levels is proposed in this paper that improves the current selection method for structural materials used in demanding industrial applications. This paper describes a new approach for quantifying the stringency of materials requirements, based on a novel deterministic algorithm, to prevent potential failures. We have applied the new methodology to different standardized specifications used in pressure vessel design, such as the SA-533 Grade B Cl.1 and SA-508 Cl.3 specifications (issued by the American Society of Mechanical Engineers), DIN 20MnMoNi55 (issued by the German Institute of Standardization) and 16MND5 (issued by the French Nuclear Commission), and determine the influence of design code selection. This study is based on key scientific publications on the influence of chemical composition on the mechanical behavior of materials, which were not considered when the technological requirements were established in the aforementioned specifications. For this purpose, a new method to quantify the efficacy of each standard has been developed using a deterministic algorithm. The process of assigning relative weights was performed by consulting a panel of experts in materials selection for reactor pressure vessels to provide a more objective methodology; thus, the resulting mathematical calculations for quantitative analysis are greatly simplified. The final results show that steel DIN 20MnMoNi55 is the best material option. Additionally, more recently developed materials such as DIN 20MnMoNi55, 16MND5 and SA-508 Cl.3 exhibit more stringent mechanical requirements than SA-533 Grade B Cl.1. The methodology presented in this paper can be used as a decision tool in the selection of materials for a wide range of applications.
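
    The weighting-and-scoring core of such a decision algorithm is easy to illustrate. In the sketch below, each specification receives per-requirement stringency scores that are combined through expert-assigned relative weights; all names and numbers are placeholders, not the paper's data.

      # Weighted stringency scoring of candidate specifications (placeholder data).
      weights = {"chemical_composition": 0.4, "tensile_strength": 0.35, "toughness": 0.25}

      specs = {
          "SA-533 Grade B Cl.1": {"chemical_composition": 2, "tensile_strength": 3, "toughness": 3},
          "DIN 20MnMoNi55":      {"chemical_composition": 4, "tensile_strength": 4, "toughness": 4},
          "16MND5":              {"chemical_composition": 4, "tensile_strength": 3, "toughness": 4},
      }

      def stringency(spec_scores):
          # expert-weighted sum of per-requirement stringency levels
          return sum(weights[req] * score for req, score in spec_scores.items())

      for name, s in specs.items():
          print(f"{name}: {stringency(s):.2f}")
      print("most stringent:", max(specs, key=lambda n: stringency(specs[n])))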

  13. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
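
    A stripped-down sketch of strict-priority goal selection over one oversubscribed resource: goals are considered in priority order and admitted while capacity remains, so a goal is never displaced by a lower-priority one. The incremental, just-in-time machinery of AVA v2 is not modeled; fields and numbers are illustrative.

      # Strict-priority goal selection over a shared resource (illustrative).
      from dataclasses import dataclass

      @dataclass
      class Goal:
          name: str
          priority: int       # lower number = higher priority
          resource_use: int   # units of the shared resource consumed

      def select_goals(goals, capacity):
          selected = []
          for g in sorted(goals, key=lambda g: g.priority):
              if g.resource_use <= capacity:   # admit only if resources suffice
                  selected.append(g)
                  capacity -= g.resource_use
          return selected

      goals = [Goal("downlink", 1, 4), Goal("image-A", 2, 5), Goal("image-B", 3, 3)]
      print([g.name for g in select_goals(goals, capacity=8)])  # -> ['downlink', 'image-B']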

  14. USE OF POPULATION VIABILITY ANALYSIS AND RESERVE SELECTION ALGORITHMS IN REGIONAL CONSERVATION PLANS

    EPA Science Inventory

    Current reserve selection algorithms have difficulty evaluating connectivity and other factors
    necessary to conserve wide-ranging species in developing landscapes. Conversely, population viability analyses may incorporate detailed demographic data but often lack sufficient spa...

  15. Polynomial algorithms for the Maximal Pairing Problem: efficient phylogenetic targeting on arbitrary trees

    PubMed Central

    2010-01-01

    Background: The Maximal Pairing Problem (MPP) is the prototype of a class of combinatorial optimization problems that are of considerable interest in bioinformatics: Given an arbitrary phylogenetic tree T and weights ω_xy for the paths between any two pairs of leaves (x, y), what is the collection of edge-disjoint paths between pairs of leaves that maximizes the total weight? Special cases of the MPP for binary trees and equal weights have been described previously; algorithms to solve the general MPP are still missing, however. Results: We describe a relatively simple dynamic programming algorithm for the special case of binary trees. We then show that the general case of multifurcating trees can be treated by interleaving solutions to certain auxiliary Maximum Weighted Matching problems with an extension of this dynamic programming approach, resulting in an overall polynomial-time solution of complexity O(n^4 log n) w.r.t. the number n of leaves. The source code of a C implementation can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/Targeting. For binary trees, we furthermore discuss several constrained variants of the MPP as well as a partition function approach to the probabilistic version of the MPP. Conclusions: The algorithms introduced here make it possible to solve the MPP also for large trees with high-degree vertices. This has practical relevance in the field of comparative phylogenetics and, for example, in the context of phylogenetic targeting, i.e., data collection with resource limitations. PMID:20525185

  16. Selective tuning of the right inferior frontal gyrus during target detection

    PubMed Central

    Hampshire, Adam; Thompson, Russell; Duncan, John; Owen, Adrian M.

    2010-01-01

    In the human brain, a network of frontal and parietal regions is commonly recruited during tasks that demand the deliberate, focused control of thought and action. Previously, using a simple target detection task, we reported striking differences in the selectivity of the BOLD response in anatomically distinct subregions of this network. In particular, it was observed that the right inferior frontal gyrus (IFG) followed a tightly tuned function, selectively responding only to the current target object. Here, we examine this functional specialization further, using adapted versions of our original task. Our results demonstrate that the response of the right IFG to targets is a strong and replicable phenomenon. It occurs under increased attentional load, when targets and distractors are equally frequent, and when controlling for inhibitory processes. These findings support the hypothesis that the right IFG responds selectively to those items that are of the most relevance to the currently intended task schema. PMID:19246331

  17. The control of attentional target selection in a colour/colour conjunction task.

    PubMed

    Berggren, Nick; Eimer, Martin

    2016-11-01

    To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors and became superadditive from approximately 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.

  18. Selective in vivo metabolic cell-labeling-mediated cancer targeting

    PubMed Central

    Wang, Hua; Wang, Ruibo; Cai, Kaimin; He, Hua; Liu, Yang; Yen, Jonathan; Wang, Zhiyu; Xu, Ming; Sun, Yiwen; Zhou, Xin; Yin, Qian; Tang, Li; Dobrucki, Iwona T; Dobrucki, Lawrence W; Chaney, Eric J; Boppart, Stephen A; Fan, Timothy M; Lezmi, Stéphane; Chen, Xuesi; Yin, Lichen; Cheng, Jianjun

    2017-01-01

    Distinguishing cancer cells from normal cells through surface receptors is vital for cancer diagnosis and targeted therapy. Metabolic glycoengineering of unnatural sugars provides a powerful tool to manually introduce chemical receptors onto the cell surface; however, cancer-selective labeling still remains a great challenge. Herein we report the design of sugars that can selectively label cancer cells both in vitro and in vivo. Specifically, we inhibit the cell-labeling activity of tetraacetyl-N-azidoacetylmannosamine (Ac4ManAz) by converting its anomeric acetyl group to a caged ether bond that can be selectively cleaved by cancer-overexpressed enzymes and thus enables the overexpression of azido groups on the surface of cancer cells. Histone deacetylase and cathepsin L-responsive acetylated azidomannosamine, one such enzymatically activatable Ac4ManAz analog developed, mediated cancer-selective labeling in vivo, which enhanced tumor accumulation of a dibenzocyclooctyne–doxorubicin conjugate via click chemistry and enabled targeted therapy against LS174T colon cancer, MDA-MB-231 triple-negative breast cancer and 4T1 metastatic breast cancer in mice. PMID:28192414

  19. Target detection using the background model from the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background, in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which can be representative of the spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD for target detection is explored in this paper. The connected components are characterized using three different approaches: the mean signature and the endmembers of each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD), and the covariance matrix of those connected components is estimated and used in the Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE) detectors. The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind-test target detection project are shown.
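
    For reference, the OSP detector mentioned here has a compact closed form: project each pixel onto the orthogonal complement of the background subspace and correlate with the target signature. A minimal numpy sketch follows, with randomly generated background basis vectors standing in for the TAD component signatures.

      # Orthogonal Subspace Projection (OSP) detector sketch.
      import numpy as np

      def osp_detector(x, t, U):
          """Score pixel spectrum x given target signature t and background basis U."""
          bands = U.shape[0]
          # projector onto the orthogonal complement of the background subspace col(U)
          P = np.eye(bands) - U @ np.linalg.pinv(U)
          return (t @ P @ x) / (t @ P @ t)

      rng = np.random.default_rng(2)
      bands = 50
      U = rng.random((bands, 3))      # three background basis vectors (stand-ins)
      t = rng.random(bands)           # target signature
      x_bg = U @ rng.random(3)        # pure background pixel
      x_tg = x_bg + 0.7 * t           # background plus target at abundance 0.7
      print(round(osp_detector(x_bg, t, U), 3), round(osp_detector(x_tg, t, U), 3))  # ~0 vs ~0.7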

  20. Input Control Processes in Rapid Serial Visual Presentations: Target Selection and Distractor Inhibition

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Watson, Derrick G.

    2006-01-01

    The attentional blink refers to the finding that the 2nd of 2 targets embedded in a stream of rapidly presented distractors is often missed. Whereas most theories of the attentional blink focus on limited-capacity processes that occur after target selection, the present work investigates the selection process itself. Identifying a target letter…

  1. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    NASA Astrophysics Data System (ADS)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was varied between 100, 150 and 200 subjects, and the proportion of targets was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found target detection time to increase and target detection rate to decrease with increasing numbers of avatars. The same is true for the secondary task reaction time, while there was no effect on the secondary task hit rate. Furthermore, we found a trend toward a U-shaped relation between the number of markings and the secondary task reaction time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support for observers in observational tasks.

  2. ROBIN: a platform for evaluating automatic target recognition algorithms: II. Protocols used for evaluating algorithms and results obtained on the SAGEM DS database

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    Over the past five years, the computer vision community has explored many different avenues of research in automatic target recognition. Noticeable advances have been made, and we are now in a situation where large-scale evaluations of ATR technologies have to be carried out, to determine the limitations of the recently proposed methods and the best directions for future work. ROBIN, a project funded by the French Ministry of Defence and the French Ministry of Research, has the ambition of becoming a new reference for benchmarking ATR algorithms in operational contexts. This project, headed by major companies and research centers involved in computer vision R&D in the field of defense (Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES), recently released a large dataset of several thousand hand-annotated infrared and RGB images of different targets in different situations. Setting up an evaluation campaign requires us to define, accurately and carefully, sets of data (both for training ATR algorithms and for their evaluation), the tasks to be evaluated, and finally the protocols and metrics for the evaluation. ROBIN offers interesting contributions to each of these three points. This paper first describes, justifies and defines the set of functions used in the ROBIN competitions and relevant for evaluating ATR algorithms (detection, localization, recognition and identification). It also defines the metrics and the protocol used for evaluating these functions. In the second part of the paper, the results obtained by several state-of-the-art algorithms on the SAGEM DS database (a subpart of ROBIN) are presented and discussed.

  3. Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target

    PubMed Central

    Goschy, Harriet; Müller, Hermann Joseph

    2013-01-01

    Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using empirical and modeling approaches based on the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
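
    The modeling conclusion, that capture probability follows from overlapping selection-time distributions, can be captured in a few lines of Monte Carlo. The mapping from salience to mean selection time and the noise level below are arbitrary illustrative choices, not the paper's fitted parameters.

      # Toy Monte Carlo: capture occurs when the distractor is selected first.
      import numpy as np

      rng = np.random.default_rng(3)

      def capture_probability(target_salience, distractor_salience, n=100_000, noise=30.0):
          # higher salience -> shorter mean selection time (arbitrary linear mapping)
          t_target = rng.normal(300 - 10 * target_salience, noise, n)
          t_distr = rng.normal(300 - 10 * distractor_salience, noise, n)
          return np.mean(t_distr < t_target)

      for ds in (4, 6, 8):   # distractor less, equally, or more salient than the target (6)
          print(ds, round(capture_probability(6, ds), 3))
      # even the less salient distractor (ds=4) captures attention on ~1/3 of trials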

  4. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503

  6. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Ahmad, S; Alsbou, N

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm that, by motion modeling, extracts the actual length, CT number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which were inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging 0-20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing

  7. An algorithm for automatic target recognition using passive radar and an EKF for estimating aircraft orientation

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.

    2005-07-01

    Rather than emitting pulses, passive radar systems rely on "illuminators of opportunity," such as TV and FM radio, to illuminate potential targets. These systems are attractive since they allow receivers to operate without emitting energy, rendering them covert. Until recently, most of the research regarding passive radar has focused on detecting and tracking targets. This dissertation focuses on extending the capabilities of passive radar systems to include automatic target recognition. The target recognition algorithm described in this dissertation uses the radar cross section (RCS) of potential targets, collected over a short period of time, as the key information for target recognition. To make the simulated RCS as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. An extended Kalman filter (EKF) estimates the target's orientation (and uncertainty in the estimate) from velocity measurements obtained from the passive radar tracker. Coupling the aircraft orientation and state with the known antenna locations permits computation of the incident and observed azimuth and elevation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of potential target classes as a function of these angles. Thus, the approximated incident and observed angles allow the appropriate RCS to be extracted from a database of FISC results. Using this process, the RCS of each aircraft in the target class is simulated as though each is executing the same maneuver as the target detected by the system. Two additional scaling processes are required to transform the RCS into a power profile (magnitude only) simulating the signal in the receiver. First, the RCS is scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. Then, the Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern
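
    The geometric step described here, recovering orientation from tracker velocity and then aspect angles to each receiver, reduces to a few lines if one assumes the aircraft body axis is aligned with its velocity vector. The sketch below shows that geometry only; it is not the dissertation's EKF, which additionally propagates the uncertainty of the estimate.

      # Orientation from velocity and body-frame aspect angles (geometry only).
      import numpy as np

      def orientation_from_velocity(v):
          vx, vy, vz = v
          heading = np.arctan2(vy, vx)                    # yaw from velocity direction
          pitch = np.arctan2(vz, np.hypot(vx, vy))
          return heading, pitch

      def aspect_angles(target_pos, receiver_pos, heading):
          los = np.asarray(receiver_pos, float) - np.asarray(target_pos, float)  # line of sight
          az = np.arctan2(los[1], los[0]) - heading       # azimuth in the body frame
          el = np.arctan2(los[2], np.hypot(los[0], los[1]))
          return (az + np.pi) % (2 * np.pi) - np.pi, el   # wrap azimuth to [-pi, pi)

      h, p = orientation_from_velocity([200.0, 50.0, 5.0])
      az, el = aspect_angles([0, 0, 3000], [20e3, 0, 0], h)
      print(np.degrees(h), np.degrees(az), np.degrees(el))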

  8. MultiMiTar: a novel multi objective optimization based miRNA-target prediction method.

    PubMed

    Mitra, Ramkrishna; Bandyopadhyay, Sanghamitra

    2011-01-01

    Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity, due to the lack of a gold standard of negative examples, of miRNA-targeting-site context-specific relevant features, and of an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at the protein level. In this article, we introduce an efficient miRNA-target prediction system, MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high-quality negative examples and the selection of biologically relevant miRNA-targeting-site context-specific features. The features are selected by using a novel feature selection technique, AMOSA-SVM, that integrates the multiobjective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods on a completely independent test data set. The MCC and ACA values of these other algorithms range from -0.269 to 0.155 and from 0.321 to 0.582, respectively. Moreover, MultiMiTar shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list that makes Multi

  9. Centroid tracker and aimpoint selection

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, Ronda; Sujata, K. V.; Venkateswara Rao, B.

    1992-11-01

    Autonomous fire-and-forget weapons have gained importance for achieving an accurate first-pass kill by hitting the target at an appropriate aimpoint. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the FOV of the sensor but does not overflow the FOV. However, as the range between the sensor and the target decreases, the image of the target grows and finally overflows the FOV at close range, and the centroid point on the target keeps changing, which is not desirable. Moreover, the centroid need not be the most desired/vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance up to almost zero range are essential to achieve maximum kill probability. This paper presents a centroid tracker realization. As the centroid offers a stable tracking point, it can be used as a reference to select the proper aimpoint. The centroid and the desired aimpoint are simultaneously tracked to avoid jamming by flares and to handle the problems arising from image overflow. Thresholding of the gray-level image to a binary image is a crucial step in a centroid tracker. Different thresholding algorithms are discussed and a suitable algorithm is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interfacing to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very high speed arithmetic and programmable logic devices to meet the speed requirement and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.
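
    The centroid computation itself is a one-liner once the image has been thresholded; below is a minimal sketch with a synthetic frame. The fixed threshold is only for illustration, since the choice of thresholding strategy is exactly the design question the paper discusses.

      # Threshold-then-centroid sketch of the tracker's core computation.
      import numpy as np

      def centroid(frame, threshold):
          mask = frame > threshold              # binary segmentation of the target
          ys, xs = np.nonzero(mask)
          if xs.size == 0:
              return None                       # no target pixels above threshold
          return xs.mean(), ys.mean()           # (column, row) of the centroid

      frame = np.zeros((64, 64))
      frame[20:30, 40:50] = 200.0               # bright synthetic target
      print(centroid(frame, threshold=100))     # -> (44.5, 24.5)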

  10. Localization Algorithm with On-line Path Loss Estimation and Node Selection

    PubMed Central

    Bel, Albert; Vicario, José López; Seco-Granados, Gonzalo

    2011-01-01

    RSS-based localization is considered a low-complexity algorithm with respect to other range techniques such as TOA or AOA. The accuracy of RSS methods depends on the suitability of the propagation models used for the actual propagation conditions. In indoor environments, in particular, it is very difficult to obtain a good propagation model. For that reason, we present a cooperative localization algorithm that dynamically estimates the path loss exponent by using RSS measurements. Since the energy consumption is a key point in sensor networks, we propose a node selection mechanism to limit the number of neighbours of a given node that are used for positioning purposes. Moreover, the selection mechanism is also useful to discard bad links that could negatively affect the performance accuracy. As a result, we derive a practical solution tailored to the strict requirements of sensor networks in terms of complexity, size and cost. We present results based on both computer simulations and real experiments with the Crossbow MICA2 motes showing that the proposed scheme offers a good trade-off in terms of position accuracy and energy efficiency. PMID:22163992
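
    The on-line path loss estimation at the heart of the method can be sketched with the standard log-distance model RSS(d) = RSS(d0) - 10 n log10(d/d0): fit the exponent n from RSS readings at known anchor distances, then invert the model to range unknown nodes. The anchor readings below are made-up numbers, not measurements from the paper.

      # Least-squares estimation of the path loss exponent from RSS readings.
      import numpy as np

      def estimate_path_loss_exponent(distances, rss, d0=1.0, rss_d0=-40.0):
          x = -10.0 * np.log10(np.asarray(distances) / d0)
          y = np.asarray(rss) - rss_d0
          return (x @ y) / (x @ x)       # least-squares slope through the origin

      def estimate_distance(rss, n, d0=1.0, rss_d0=-40.0):
          return d0 * 10 ** ((rss_d0 - rss) / (10.0 * n))

      d = [2.0, 4.0, 8.0, 16.0]
      rss = [-46.2, -52.1, -57.8, -64.3]          # simulated indoor readings
      n = estimate_path_loss_exponent(d, rss)
      print(round(n, 2), round(estimate_distance(-55.0, n), 2))   # n ~ 2.0, range ~ 5.6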

  11. Molecular pathway activation - new type of biomarkers for tumor morphology and personalized selection of target drugs.

    PubMed

    Buzdin, Anton; Sorokin, Maxim; Garazha, Andrew; Sekacheva, Marina; Kim, Ella; Zhukov, Nikolay; Wang, Ye; Li, Xinmin; Kar, Souvik; Hartmann, Christian; Samii, Amir; Giese, Alf; Borisov, Nicolas

    2018-06-20

    Anticancer target drugs (ATDs) specifically bind and inhibit molecular targets that play important roles in cancer development and progression, being deeply implicated in intracellular signaling pathways. To date, hundreds of different ATDs were approved for clinical use in the different countries. Compared to previous chemotherapy treatments, ATDs often demonstrate reduced side effects and increased efficiency, but also have higher costs. However, the efficiency of ATDs for the advanced stage tumors is still insufficient. Different ATDs have different mechanisms of action and are effective in different cohorts of patients. Personalized approaches are therefore needed to select the best ATD candidates for the individual patients. In this review, we focus on a new generation of biomarkers - molecular pathway activation - and on their applications for predicting individual tumor response to ATDs. The success in high throughput gene expression profiling and emergence of novel bioinformatic tools reinforced quick development of pathway related field of molecular biomedicine. The ability to quantitatively measure degree of a pathway activation using gene expression data has revolutionized this field and made the corresponding analysis quick, robust and inexpensive. This success was further enhanced by using machine learning algorithms for selection of the best biomarkers. We review here the current progress in translating these studies to clinical oncology and patient-oriented adjustment of cancer therapy. Copyright © 2018. Published by Elsevier Ltd.

  12. Top-attack modeling and automatic target detection using synthetic FLIR scenery

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2004-09-01

    A series of experiments has been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect embedded synthetic targets in scenes, and an ARL scoring algorithm, using receiver operating characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that, using these tools, a detailed, physically meaningful target detection analysis is possible and that scenario-specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competing detection algorithms by providing well-defined and controllable target detection scenarios, as well as for the training and testing of expert human observers.
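
    The ROC scoring step is generic enough to sketch: sweep a threshold over detector scores, accumulate true and false positive rates, and integrate for the area under the curve (AUC). The labels and scores below are synthetic stand-ins for detector output.

      # ROC curve and AUC from detector scores and ground-truth labels.
      import numpy as np

      def roc_curve(scores, labels):
          order = np.argsort(scores)[::-1]          # descending score
          labels = np.asarray(labels)[order]
          tps = np.cumsum(labels)                   # true positives at each cut
          fps = np.cumsum(1 - labels)               # false positives at each cut
          tpr = tps / labels.sum()
          fpr = fps / (labels.size - labels.sum())
          return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

      def auc(fpr, tpr):
          return np.trapz(tpr, fpr)                 # trapezoidal integration

      rng = np.random.default_rng(4)
      labels = rng.integers(0, 2, 500)
      scores = labels + rng.normal(0, 0.8, 500)     # informative but noisy detector
      fpr, tpr = roc_curve(scores, labels)
      print(round(auc(fpr, tpr), 3))                # well above chance (0.5)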

  13. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is one of the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems characterized by multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is proposed first. The recombination operator generates promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. Comparisons with the classic CSA, the CSA with the recombination operator (RCSA), and the CSA with the recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
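
    A compact clonal selection loop on a toy minimization problem, with rank-dependent hypermutation and a simple coordinate-mixing recombination loosely mirroring the operators discussed; population sizes, mutation scales, and the test function are illustrative assumptions, not the paper's settings.

      # Clonal selection sketch with recombination and rank-scaled hypermutation.
      import numpy as np

      rng = np.random.default_rng(5)

      def sphere(x):                        # toy fitness: minimize sum of squares
          return np.sum(x**2, axis=-1)

      def csa(dim=5, pop=20, clones=5, gens=200, bounds=(-5.0, 5.0)):
          lo, hi = bounds
          P = rng.uniform(lo, hi, (pop, dim))
          for _ in range(gens):
              f = sphere(P)
              ranks = np.argsort(np.argsort(f))          # 0 = best antibody
              offspring = []
              for i in range(pop):
                  scale = 0.5 * (ranks[i] + 1) / pop     # worse rank -> larger mutation
                  for _ in range(clones):
                      offspring.append(np.clip(P[i] + rng.normal(0, scale, dim), lo, hi))
                  a, b = P[rng.integers(pop)], P[rng.integers(pop)]
                  mask = rng.integers(0, 2, dim).astype(bool)
                  offspring.append(np.where(mask, a, b)) # combinatorial recombination
              allp = np.vstack([P, np.asarray(offspring)])
              P = allp[np.argsort(sphere(allp))[:pop]]   # elitist selection
          return P[0], sphere(P[0])

      best, fit = csa()
      print(np.round(best, 3), round(float(fit), 6))     # converges toward the origin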

  14. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    NASA Astrophysics Data System (ADS)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in video streams with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
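
    The bandit view of cascade selection is easy to sketch with an epsilon-greedy policy: each cascade is an arm, and a detection in the current frame is the reward. The run_cascade stub and its success rates are hypothetical placeholders, and the paper's adaptive strategy is more refined than plain epsilon-greedy.

      # Epsilon-greedy bandit over a set of Viola-Jones cascades (stubbed detector).
      import random

      def run_cascade(cascade_id, frame):
          """Hypothetical stub: return True on a successful detection."""
          return random.random() < [0.2, 0.5, 0.35][cascade_id]

      def eps_greedy_select(frames, n_cascades=3, eps=0.1):
          counts = [0] * n_cascades
          values = [0.0] * n_cascades     # running mean reward per cascade
          for frame in frames:
              if random.random() < eps:
                  arm = random.randrange(n_cascades)                      # explore
              else:
                  arm = max(range(n_cascades), key=lambda a: values[a])   # exploit
              r = run_cascade(arm, frame)
              counts[arm] += 1
              values[arm] += (r - values[arm]) / counts[arm]              # incremental mean
          return values, counts

      random.seed(6)
      values, counts = eps_greedy_select(frames=range(2000))
      print([round(v, 2) for v in values], counts)   # most pulls go to the best cascade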

  15. Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low Observable Targets

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Brown, A.; Brown, J.

    The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly-improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud

  16. Context-dependent sequential effects of target selection for action.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2013-07-11

    Humans exhibit variation in behavior from moment to moment even when performing a simple, repetitive task. Errors are typically followed by cautious responses, minimizing subsequent distractor interference. However, less is known about how variation in the execution of an ultimately correct response affects subsequent behavior. We asked participants to reach toward a uniquely colored target presented among distractors and created two categories to describe participants' responses in correct trials based on analyses of movement trajectories; partial errors referred to trials in which observers initially selected a nontarget for action before redirecting the movement and accurately pointing to the target, and direct movements referred to trials in which the target was directly selected for action. We found that latency to initiate a hand movement was shorter in trials following partial errors compared to trials following direct movements. Furthermore, when the target and distractor colors were repeated, movement time and reach movement curvature toward distractors were greater following partial errors compared to direct movements. Finally, when the colors were repeated, partial errors were more frequent than direct movements following partial-error trials, and direct movements were more frequent following direct-movement trials. The dependence of these latter effects on repeated-task context indicates the involvement of higher-level cognitive mechanisms in an integrated attention-action system in which execution of a partial-error or direct-movement response affects memory representations that bias performance in subsequent trials. Altogether, these results demonstrate that whether a nontarget is selected for action or not has a measurable impact on subsequent behavior.

  17. Selecting Random Distributed Elements for HIFU using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng

    2011-09-01

    As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electrically steer the HIFU focus using phased-array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e., water) and skin, or in the intervening tissue. Although sparse, randomly distributed piston elements can reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased-array HIFU. A new HIFU transmission strategy is proposed in this study: firing a number of, but not all, elements for a certain period and then changing to another group for the next firing sequence. The advantages are that 1) the asymmetric positions of the active elements may reduce the side lobes, and 2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications), so that the decrease in transducer efficiency due to thermal accumulation is minimized. A genetic algorithm was used to select the randomly distributed elements in the HIFU array, with the amplitudes of the first side lobes at the focal plane used as the fitness value in the optimization. Overall, it is suggested that the proposed new strategy can reduce the side lobes and the consequent side effects, and that the genetic algorithm is effective in selecting the randomly distributed elements in a HIFU array.
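
    A hedged sketch of the optimization loop: a binary chromosome marks the active elements, the fitness penalizes side-lobe amplitude, and standard crossover/mutation with a repair step keeps the active-element count fixed. The array-factor side-lobe model below is a crude placeholder for the acoustic field computation used in the study.

      # GA sketch for selecting which array elements to fire (placeholder fitness).
      import numpy as np

      rng = np.random.default_rng(7)
      N_ELEM, N_ACTIVE, POP, GENS = 64, 32, 40, 150

      def side_lobe_amplitude(mask):
          # Placeholder: normalized array-factor peak away from broadside for a
          # 1-D aperture with unit spacing; a real HIFU model would be field-based.
          angles = np.linspace(0.05, np.pi / 2, 256)
          pos = np.nonzero(mask)[0]
          af = np.abs(np.exp(1j * np.outer(np.sin(angles), pos)).sum(axis=1))
          return af.max() / max(len(pos), 1)

      def repair(mask):                    # keep exactly N_ACTIVE elements on
          on, off = np.nonzero(mask)[0], np.nonzero(~mask)[0]
          if len(on) > N_ACTIVE:
              mask[rng.choice(on, len(on) - N_ACTIVE, replace=False)] = False
          elif len(on) < N_ACTIVE:
              mask[rng.choice(off, N_ACTIVE - len(on), replace=False)] = True
          return mask

      pop = np.array([repair(rng.random(N_ELEM) < 0.5) for _ in range(POP)])
      for _ in range(GENS):
          fit = np.array([side_lobe_amplitude(m) for m in pop])   # lower is better
          parents = pop[np.argsort(fit)[:POP // 2]]
          children = []
          while len(children) < POP - len(parents):
              a, b = parents[rng.integers(len(parents), size=2)]
              cut = rng.integers(1, N_ELEM)
              child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
              flip = rng.random(N_ELEM) < 0.02                    # bit-flip mutation
              children.append(repair(np.logical_xor(child, flip)))
          pop = np.vstack([parents] + children)

      print("best side-lobe amplitude:", round(min(side_lobe_amplitude(m) for m in pop), 3))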

  18. Selective tumor cell targeting by the disaccharide moiety of bleomycin.

    PubMed

    Yu, Zhiqiang; Schmaltz, Ryan M; Bozeman, Trevor C; Paul, Rakesh; Rishel, Michael J; Tsosie, Krystal S; Hecht, Sidney M

    2013-02-27

    In a recent study, the well-documented tumor targeting properties of the antitumor agent bleomycin (BLM) were studied in cell culture using microbubbles that had been derivatized with multiple copies of BLM. It was shown that BLM selectively targeted MCF-7 human breast carcinoma cells but not the "normal" breast cell line MCF-10A. Furthermore, it was found that the BLM analogue deglycobleomycin, which lacks the disaccharide moiety of BLM, did not target either cell line, indicating that the BLM disaccharide moiety is necessary for tumor selectivity. Not resolved in the earlier study were the issues of whether the BLM disaccharide moiety alone is sufficient for tumor cell targeting and the possible cellular uptake of the disaccharide. In the present study, we conjugated BLM, deglycoBLM, and BLM disaccharide to the cyanine dye Cy5**. It was found that the BLM and BLM disaccharide conjugates, but not the deglycoBLM conjugate, bound selectively to MCF-7 cells and were internalized. The same was also true for the prostate cancer cell line DU-145 (but not for normal PZ-HPV-7 prostate cells) and for the pancreatic cancer cell line BxPC-3 (but not for normal SVR A221a pancreas cells). The targeting efficiency of the disaccharide was only slightly less than that of BLM in MCF-7 and DU-145 cells and comparable to that of BLM in BxPC-3 cells. These results establish that the BLM disaccharide is both necessary and sufficient for tumor cell targeting, a finding with obvious implications for the design of novel tumor imaging and therapeutic agents.

  19. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  20. Selecting multiple features delays perception, but only when targets are horizontally arranged.

    PubMed

    Lo, Shih-Yu

    2017-01-01

    Building on the finding that perception lags when attention is split across multiple features (Lo et al., 2012), this study investigated how the feature-based lag effect interacts with the spatial arrangement of the targets. Participants were presented with gratings whose spatial frequencies changed constantly. The task was to monitor two gratings of the same or different colors and report their spatial frequencies right before the stimulus offset. The results showed a perceptual lag wherein the reported value was closer to the physical value some time prior to the stimulus offset. This lag effect was larger when the two gratings were of different colors than when they were of the same color. Furthermore, the feature-based lag effect was statistically significant when the two gratings were horizontally arranged but not when they were vertically or diagonally arranged. A model is proposed to explain the effect of target arrangement: when targets are horizontally arranged, selecting an additional feature delays perception; when targets are vertically or diagonally arranged, target selection for the lower field is prioritized. This prioritization of the lower target might prompt observers to select only the lower target and ignore the upper one, which causes more perceptual errors without delaying perception.

  1. Gene selection using hybrid binary black hole algorithm and modified binary particle swarm optimization.

    PubMed

    Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin

    2018-04-14

    In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic model combining the Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate its exploration and exploitation, further improving performance. The model is combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor, and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarray datasets. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets.

  2. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-29

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost-prohibitive to perform surveys over 100% of the site. In that case, it is particularly important to be explicit about the performance required of the survey. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine 1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, 2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and 3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.

  3. Statistical Algorithms for Designing Geophysical Surveys to Detect UXO Target Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Robert F.; Carlson, Deborah K.; Gilbert, Richard O.

    2005-07-28

    The U.S. Department of Defense is in the process of assessing and remediating closed, transferred, and transferring military training ranges across the United States. Many of these sites have areas that are known to contain unexploded ordnance (UXO). Other sites or portions of sites are not expected to contain UXO, but some verification of this expectation using geophysical surveys is needed. Many sites are so large that it is often impractical and/or cost-prohibitive to perform surveys over 100% of the site. In such cases, it is particularly important to be explicit about the performance required of the surveys. This article presents the statistical algorithms developed to support the design of geophysical surveys along transects (swaths) to find target areas (TAs) of anomalous geophysical readings that may indicate the presence of UXO. The algorithms described here determine (1) the spacing between transects that should be used for the surveys to achieve a specified probability of traversing the TA, (2) the probability of both traversing and detecting a TA of anomalous geophysical readings when the spatial density of anomalies within the TA is either uniform (unchanging over space) or has a bivariate normal distribution, and (3) the probability that a TA exists when it was not found by surveying along transects. These algorithms have been implemented in the Visual Sample Plan (VSP) software to develop cost-effective transect survey designs that meet performance objectives.
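
    The first of the three quantities has a simple closed form in the idealized case of a circular target area and parallel transects: the TA is traversed whenever its centre falls within one radius of a transect line. The Python sketch below states this and checks it by Monte Carlo; the published algorithms (and VSP) handle more general shapes, detection probabilities, and anomaly densities.

      import numpy as np

      def p_traverse(radius, spacing):
          """Probability that a parallel-transect survey with the given spacing
          traverses a circular target area of the given radius, assuming the
          TA centre falls uniformly at random between two transects."""
          return min(1.0, 2.0 * radius / spacing)

      # Monte Carlo check: drop TA centres uniformly over one spacing interval,
      # with transects at positions 0 and S.
      rng = np.random.default_rng(0)
      R, S = 20.0, 100.0
      centres = rng.uniform(0.0, S, 1_000_000)
      hit = (centres <= R) | (centres >= S - R)   # within R of either transect
      print(p_traverse(R, S), hit.mean())         # both ~0.4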

  4. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, C.T.; Rong, J.; Dodge, C.W.

    2014-06-15

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% andmore » 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and mediumcontrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.« less

  5. L2-Boosting algorithm applied to high-dimensional problems in genomic selection.

    PubMed

    González-Recio, Oscar; Weigel, Kent A; Gianola, Daniel; Naya, Hugo; Rosa, Guilherme J M

    2010-06-01

    The L2-Boosting algorithm is one of the most promising machine-learning techniques to have appeared in recent decades. It may be applied to high-dimensional problems such as whole-genome studies, and it is relatively simple from a computational point of view. In this study, we used this algorithm in a genomic selection context to make predictions of yet-to-be-observed outcomes. Two data sets were used: (1) productive lifetime predicted transmitting abilities from 4702 Holstein sires genotyped for 32 611 single nucleotide polymorphisms (SNPs) derived from the Illumina BovineSNP50 BeadChip, and (2) progeny averages of food conversion rate, pre-corrected by environmental and mate effects, in 394 broilers genotyped for 3481 SNPs. Each of these data sets was split into training and testing sets, the latter comprising dairy or broiler sires whose ancestors were in the training set. Two weak learners, ordinary least squares (OLS) and non-parametric (NP) regression, were used for the L2-Boosting algorithm, to provide a stringent evaluation of the procedure. This algorithm was compared with BL [Bayesian LASSO (least absolute shrinkage and selection operator)] and BayesA regression. Learning tasks were carried out in the training set, whereas validation of the models was performed in the testing set. Pearson correlations between predicted and observed responses in the dairy cattle (broiler) data set were 0.65 (0.33), 0.53 (0.37), 0.66 (0.26) and 0.63 (0.27) for OLS-Boosting, NP-Boosting, BL and BayesA, respectively. The smallest bias and mean-squared errors (MSEs) were obtained with OLS-Boosting in both the dairy cattle (0.08 and 1.08, respectively) and broiler (-0.011 and 0.006) data sets. In the dairy cattle data set, the BL was more accurate (bias=0.10 and MSE=1.10) than BayesA (bias=1.26 and MSE=2.81), whereas no differences between these two methods were found in the broiler data set. L2-Boosting with a suitable learner was found to be a competitive
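
    The sketch below shows componentwise L2-Boosting with an OLS base learner in Python, assuming roughly centred predictor columns: each step refits the current residuals on every single predictor and nudges the best one's coefficient by a shrinkage factor. It is a generic illustration, not the authors' implementation.

      import numpy as np

      def l2_boost(X, y, n_steps=500, shrinkage=0.1):
          """Componentwise L2-Boosting with OLS base learners: at each step,
          fit every single predictor to the current residuals and update the
          coefficient of the one that reduces squared error the most."""
          n, p = X.shape
          beta = np.zeros(p)
          intercept = y.mean()
          resid = y - intercept
          denom = np.sum(X * X, axis=0)
          for _ in range(n_steps):
              slopes = X.T @ resid / denom          # per-column OLS slopes
              sse = np.array([np.sum((resid - s * X[:, j])**2)
                              for j, s in enumerate(slopes)])
              j = np.argmin(sse)
              beta[j] += shrinkage * slopes[j]
              resid -= shrinkage * slopes[j] * X[:, j]
          return intercept, beta

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 100))            # e.g., centred SNP codes
      y = 2.0 * X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=200)
      b0, b = l2_boost(X, y)
      print(np.flatnonzero(np.abs(b) > 0.1))     # should recover columns 3 and 7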

  6. Applications of machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet stars

    NASA Astrophysics Data System (ADS)

    Morello, Giuseppe; Morris, P. W.; Van Dyk, S. D.; Marston, A. P.; Mauerhan, J. C.

    2018-01-01

    We have investigated and applied machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet (WR) candidates. Objects taken from the Spitzer Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) catalogue of the infrared objects in the Galactic plane can be classified into different stellar populations based on the colours inferred from their broad-band photometric magnitudes [J, H and Ks from 2 Micron All Sky Survey (2MASS), and the four Spitzer/IRAC bands]. The algorithms tested in this pilot study are variants of the k-nearest neighbours approach, which is ideal for exploratory studies of classification problems where interrelations between variables and classes are complicated. The aims of this study are (1) to provide an automated tool to select reliable WR candidates and potentially other classes of objects, (2) to measure the efficiency of infrared colour selection at performing these tasks and (3) to lay the groundwork for statistically inferring the total number of WR stars in our Galaxy. We report the performance results obtained over a set of known objects and selected candidates for which we have carried out follow-up spectroscopic observations, and confirm the discovery of four new WR stars.
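
    A minimal k-nearest-neighbours classifier over a two-dimensional colour space, in plain Python/numpy; the colour axes, class means, and scatter below are invented for illustration and do not reproduce the GLIMPSE/2MASS photometry used in the paper.

      import numpy as np

      def knn_predict(X_train, y_train, X_new, k=5):
          """Plain k-nearest-neighbours majority vote in colour space."""
          d = np.linalg.norm(X_train[None, :, :] - X_new[:, None, :], axis=2)
          nearest = np.argsort(d, axis=1)[:, :k]
          votes = y_train[nearest]
          return np.array([np.bincount(v).argmax() for v in votes])

      # Hypothetical two-colour feature space, e.g. (J - H, Ks - [8.0]).
      rng = np.random.default_rng(3)
      wr = rng.normal([1.5, 2.0], 0.3, size=(50, 2))      # "WR-like" colours
      other = rng.normal([0.5, 0.5], 0.3, size=(200, 2))  # field population
      X = np.vstack([wr, other])
      y = np.array([1] * 50 + [0] * 200)
      query = np.array([[1.4, 1.9], [0.4, 0.6]])
      print(knn_predict(X, y, query))   # -> [1 0]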

  7. Contextual control over selective attention: evidence from a two-target method.

    PubMed

    MacLellan, Ellen; Shore, David I; Milliken, Bruce

    2015-07-01

    Selective attention is generally studied with conflict tasks, using response time as the dependent measure. Here, we study the impact of selective attention to a first target, T1, presented simultaneously with a distractor, on the accuracy of subsequent encoding of a second target item, T2. This procedure produced an "attentional blink" (AB) effect much like that reported in other studies, and allowed us to study the influence of context on cognitive control with a novel method. In particular, we examined whether preparation to attend selectively to T1 had an impact on the selective encoding of T1 that would translate to report of T2. Preparation to attend selectively was manipulated by varying whether difficult selective attention T1 trials were presented in the context of other difficult selective attention T1 trials. The results revealed strong context effects of this nature, with smaller AB effects when difficult selective attention T1 trials were embedded in a context with many, rather than few, other difficult selective attention T1 trials. Further, the results suggest that both the trial-to-trial local context and the block-wide global context modulate performance in this task.

  8. An evolution based biosensor receptor DNA sequence generation algorithm.

    PubMed

    Kim, Eungyeong; Lee, Malrey; Gatton, Thomas M; Lee, Jaewan; Zang, Yupeng

    2010-01-01

    A biosensor is composed of a bioreceptor, an associated recognition molecule, and a signal transducer that can selectively detect target substances for analysis. DNA-based biosensors utilize receptor molecules that allow hybridization with the target analyte. However, most DNA biosensor research uses oligonucleotides as the target analytes and does not address the potential problems of real samples. The identification of recognition molecules suitable for real target analyte samples is an important step towards further development of DNA biosensors. This study examines the characteristics of DNA used as bioreceptors and proposes a hybrid evolution-based DNA sequence generating algorithm, based on DNA computing, to identify suitable DNA bioreceptor recognition molecules for stable hybridization with real target substances. The Traveling Salesman Problem (TSP) approach is applied in the proposed algorithm to evaluate the safety and fitness of the generated DNA sequences. This approach improves the efficiency and stability of variable-length DNA sequence generation and allows extension to sequences with diverse receptor recognition requirements.

  9. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and an automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.

  10. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability to adaptive and adaptable interactive systems, data mining, and other applications.

  11. Multipass Target Search in Natural Environments

    PubMed Central

    Otte, Michael W.; Sofge, Donald; Gupta, Satyandra K.

    2017-01-01

    Consider a disaster scenario where search and rescue workers must search difficult-to-access buildings during an earthquake or flood. Often, finding survivors a few hours sooner results in a dramatic increase in saved lives, suggesting the use of drones for expedient rescue operations. Entropy can be used to quantify the generation and resolution of uncertainty. When searching for targets, maximizing the mutual information of future sensor observations will minimize expected target location uncertainty by minimizing the entropy of the future estimate. Motion planning for multi-target autonomous search requires planning over an area with an imperfect sensor and may require multiple passes, which is hindered by the submodularity property of mutual information. Further, mission duration constraints must be handled, which requires consideration of the vehicle's dynamics to generate feasible trajectories and requires planning trajectories that span the entire mission duration, something most information-gathering algorithms are incapable of doing. If unanticipated changes occur in an uncertain environment, new plans must be generated quickly. In addition, planning multipass trajectories requires evaluating path-dependent rewards and planning in the space of all previously selected actions, compounding the problem. We present an anytime algorithm for autonomous multipass target search in natural environments. The algorithm is capable of generating long-duration, dynamically feasible multipass coverage plans that maximize mutual information, using a variety of techniques such as ϵ-admissible heuristics to speed up the search. To the authors' knowledge this is the first attempt at efficiently solving multipass target search problems of such long duration. The proposed algorithm is based on best-first branch and bound and is benchmarked against state-of-the-art algorithms adapted to the problem in natural Simplex environments, gathering the most information in the
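
    The information-theoretic objective can be made concrete for a single map cell holding a target with some prior probability, observed by a sensor with known detection and false-alarm rates: the mutual information between target state and the next observation equals the expected entropy reduction. A small self-contained sketch, with illustrative numbers:

      import numpy as np

      def H(p):
          """Binary entropy in bits."""
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def mutual_information(prior, p_d, p_fa):
          """I(target; observation) for one cell: prior P(target present),
          detector with detection probability p_d and false-alarm rate p_fa."""
          p_pos = prior * p_d + (1 - prior) * p_fa        # P(detection)
          post_pos = prior * p_d / p_pos                  # P(target | detection)
          post_neg = prior * (1 - p_d) / (1 - p_pos)      # P(target | no detection)
          expected_post_H = p_pos * H(post_pos) + (1 - p_pos) * H(post_neg)
          return H(prior) - expected_post_H

      # A half-certain cell viewed by a decent sensor is worth the most looking at.
      for prior in (0.05, 0.5, 0.95):
          print(prior, mutual_information(prior, p_d=0.9, p_fa=0.1))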

  12. A Hybrid Search Algorithm for Swarm Robots Searching in an Unknown Environment

    PubMed Central

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic, as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region, given the limited resources available and time constraints. In this research, we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be searched first, and thereby reduced the residence time of the target to improve search efficiency. PMID:25386855

  13. A hybrid search algorithm for swarm robots searching in an unknown environment.

    PubMed

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic, as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region, given the limited resources available and time constraints. In this research, we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be searched first, and thereby reduced the residence time of the target to improve search efficiency.

  14. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677

  15. The Successive Projections Algorithm for interval selection in trilinear partial least-squares with residual bilinearization.

    PubMed

    Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U

    2014-02-06

    In this work, the Successive Projections Algorithm (SPA) is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified in order to select intervals for use in trilinear PLS. The ability of the proposed algorithm, namely iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, the determination of four fluorophores (l-phenylalanine, l-3,4-dihydroxyphenylalanine, 1,4-dihydroxybenzene, and l-tryptophan) was conducted with excitation-emission data matrices. In the second experimental system, quantitation of ofloxacin was performed in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high-performance liquid chromatography with a UV-vis diode array detector. For comparison purposes, a genetic algorithm (GA) coupled with N-PLS/RBL was also used in this work. In most of the studied cases, iSPA-N-PLS proved to be a promising tool for variable selection in second-order calibration, generating models with smaller RMSEP compared with both the global model using all sensors in two dimensions and GA-N-PLS/RBL. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Social exclusion impairs distractor suppression but not target enhancement in selective attention.

    PubMed

    Xu, Mengsi; Li, Zhiai; Diao, Liuting; Fan, Lingxia; Zhang, Lijie; Yuan, Shuge; Yang, Dong

    2017-11-01

    Social exclusion has been thought to weaken one's ability to exert inhibitory control. Existing studies have primarily focused on the relationship between exclusion and behavioral inhibition, and have reported that exclusion impairs behavioral inhibition. However, whether exclusion also affects selective attention, another important aspect of inhibitory control, remains unknown. Therefore, the current study aimed to explore whether social exclusion impairs selective attention, and to specifically examine its effect on two hypothesized mechanisms of selective attention: target enhancement and distractor suppression. The Cyberball game was used to manipulate social exclusion. Participants then performed a visual search task while event-related potentials were recorded. In the visual search task, target and salient distractor were either both presented laterally or one was presented on the vertical midline and the other laterally. Results showed that social exclusion differentially affected target and distractor processing. While exclusion impaired distractor suppression, reflected as smaller distractor-positivity (Pd) amplitudes for the exclusion group compared to the inclusion group, it did not affect target enhancement, reflected as similar target-negativity (Nt) amplitudes for both the exclusion and inclusion groups. Together, these results extend our understanding of the relationship between exclusion and inhibitory control, and suggest that social exclusion affects selective attention in a more complex manner than previously thought. Copyright © 2017. Published by Elsevier B.V.

  17. Use of sexually transmitted disease risk assessment algorithms for selection of intrauterine device candidates.

    PubMed

    Morrison, C S; Sekadde-Kigondu, C; Miller, W C; Weiner, D H; Sinei, S K

    1999-02-01

    Sexually transmitted diseases (STD) are an important contraindication for intrauterine device (IUD) insertion. Nevertheless, laboratory testing for STD is not possible in many settings. The objective of this study was to evaluate the use of risk assessment algorithms to predict STD and subsequent IUD-related complications among IUD candidates. Among 615 IUD users in Kenya, the following algorithms were evaluated: 1) an STD algorithm based on US Agency for International Development (USAID) Technical Working Group guidelines; 2) a Centers for Disease Control and Prevention (CDC) algorithm for management of chlamydia; and 3) a data-derived algorithm modeled from study data. Algorithms were evaluated for prediction of chlamydial and gonococcal infection at 1 month and complications (pelvic inflammatory disease [PID], IUD removals, and IUD expulsions) over 4 months. Women with STD were more likely to develop complications than women without STD (19% vs 6%; risk ratio = 2.9; 95% CI 1.3-6.5). For STD prediction, the USAID algorithm was 75% sensitive and 48% specific, with a positive likelihood ratio (LR+) of 1.4. The CDC algorithm was 44% sensitive and 72% specific, LR+ = 1.6. The data-derived algorithm was 91% sensitive and 56% specific, with LR+ = 2.0 and LR- = 0.2. Category-specific LRs for this algorithm identified women with very low (< 1%) and very high (29%) infection probabilities. The data-derived algorithm was also the best predictor of IUD-related complications. These results suggest that use of STD algorithms may improve the selection of IUD users. Women at high risk for STD could be counseled to avoid the IUD, whereas women at moderate risk should be monitored closely and counseled to use condoms.
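
    The reported likelihood ratios follow directly from the sensitivities and specificities, via LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity; the snippet below reproduces them (small differences from the published figures reflect rounding of the reported sensitivity and specificity):

      # Positive and negative likelihood ratios from sensitivity/specificity.
      for name, sens, spec in [("USAID", 0.75, 0.48),
                               ("CDC", 0.44, 0.72),
                               ("data-derived", 0.91, 0.56)]:
          lr_pos = sens / (1 - spec)
          lr_neg = (1 - sens) / spec
          print(f"{name}: LR+ = {lr_pos:.1f}, LR- = {lr_neg:.1f}")
      # -> USAID LR+ ~1.4; CDC LR+ ~1.6; data-derived LR+ ~2.1, LR- ~0.2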

  18. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed when initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  19. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared for molecular descriptor selection in the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), with root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, and that GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
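
    A compact sketch of the general recipe, in Python: a genetic algorithm searches binary descriptor masks, with cross-validated RMSEP of a PLS model as the fitness. The data are synthetic, the GA operators are deliberately simple, and nothing here reproduces the paper's parameter settings.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      n, p = 83, 60                      # peptides x descriptors (toy scale)
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=n)

      def rmsep(mask, n_splits=4):
          """Fitness: cross-validated RMSEP of PLS on the selected descriptors."""
          if mask.sum() < 2:
              return np.inf
          idx = np.arange(n)
          err = []
          for k in range(n_splits):
              test = idx % n_splits == k
              pls = PLSRegression(n_components=min(2, int(mask.sum())))
              pls.fit(X[~test][:, mask], y[~test])
              pred = pls.predict(X[test][:, mask]).ravel()
              err.append(np.mean((pred - y[test])**2))
          return np.sqrt(np.mean(err))

      pop = rng.random((30, p)) < 0.2        # initial population of masks
      for gen in range(40):
          fit = np.array([rmsep(m) for m in pop])
          pop = pop[np.argsort(fit)]         # best masks first
          children = []
          for _ in range(len(pop) // 2):
              a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]  # elite parents
              cut = rng.integers(1, p)
              child = np.concatenate([a[:cut], b[cut:]])                 # one-point crossover
              flip = rng.random(p) < 0.02                                # mutation
              children.append(child ^ flip)
          pop[len(pop) - len(children):] = children
      print("selected descriptors:", np.flatnonzero(pop[0]))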

  20. Scattering of targets over layered half space using a semi-analytic method in conjunction with FDTD algorithm.

    PubMed

    Cao, Le; Wei, Bing

    2014-08-25

    A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the required calculation memory and time are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green's function. In this paper, a new method is proposed which avoids using the complex and time-consuming half-space Green's function. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of the scattering and radiation of targets over a layered half space.

  1. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    PubMed Central

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
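
    At toy scale, the decision structure can be seen by brute-force enumeration instead of integer programming: each node picks "send raw" or one of two compression algorithms, and we minimize total energy subject to a latency budget. All costs below are invented for illustration; the paper solves the real problem as binary integer programs over the network topology.

      from itertools import product

      # Hypothetical per-node options: (name, energy cost, latency cost) for
      # sending raw data or compressing first with one of two algorithms.
      options = [
          ("raw", 5.0, 1.0),
          ("lz",  3.0, 2.5),   # strong compression, slower
          ("rle", 4.0, 1.5),   # lighter compression, faster
      ]
      n_nodes, latency_budget = 4, 8.0

      best = None
      for choice in product(range(len(options)), repeat=n_nodes):
          energy = sum(options[c][1] for c in choice)
          latency = sum(options[c][2] for c in choice)
          if latency <= latency_budget and (best is None or energy < best[0]):
              best = (energy, choice)

      energy, choice = best
      print(energy, [options[c][0] for c in choice])   # mixes lz and rle nodes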

  2. Diverse Actions and Target-Site Selectivity of Neonicotinoids: Structural Insights

    PubMed Central

    Matsuda, Kazuhiko; Kanaoka, Satoshi; Akamatsu, Miki; Sattelle, David B.

    2009-01-01

    The nicotinic acetylcholine receptors (nAChRs) are targets for human and veterinary medicines as well as insecticides. Subtype-selectivity among the diverse nAChR family members is important for medicines targeting particular disorders, and pest-insect selectivity is essential for the development of safer, environmentally acceptable insecticides. Neonicotinoid insecticides selectively targeting insect nAChRs have important applications in crop protection and animal health. Members of this class exhibit strikingly diverse actions on their nAChR targets. Here we review the chemistry and diverse actions of neonicotinoids on insect and mammalian nAChRs. Electrophysiological studies on native nAChRs and on wild-type and mutagenized recombinant nAChRs have shown that basic residues particular to loop D of insect nAChRs are likely to interact electrostatically with the nitro group of neonicotinoids. In 2008, the crystal structures were published showing neonicotinoids docking into the acetylcholine binding site of molluscan acetylcholine binding proteins with homology to the ligand binding domain (LBD) of nAChRs. The crystal structures showed that 1) glutamine in loop D, corresponding to the basic residues of insect nAChRs, hydrogen bonds with the NO2 group of imidacloprid and 2) neonicotinoid-unique stacking and CH-π bonds at the LBD. A neonicotinoid-resistant strain obtained by laboratory-screening has been found to result from target site mutations, and possible reasons for this are also suggested by the crystal structures. The prospects of designing neonicotinoids that are safe not only for mammals but also for beneficial insects such as honey bees (Apis mellifera) are discussed in terms of interactions with non-α nAChR subunits. PMID:19321668

  3. Detection and tracking of a moving target using SAR images with the particle filter-based track-before-detect algorithm.

    PubMed

    Gao, Han; Li, Jingwen

    2014-06-19

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.

  4. Detection and Tracking of a Moving Target Using SAR Images with the Particle Filter-Based Track-Before-Detect Algorithm

    PubMed Central

    Gao, Han; Li, Jingwen

    2014-01-01

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640
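
    A minimal particle-filter TBD loop in Python for a 1-D stand-in problem: particles carry a kinematic state, and weights come from the Gaussian likelihood ratio of "target present at the particle's cell" versus "noise only", so detection and tracking happen jointly. The SAR signal model, azimuth/velocity ambiguity handling, and sub-area likelihood evaluation of the paper are all omitted.

      import numpy as np

      rng = np.random.default_rng(5)
      n_frames, n_cells, n_particles = 30, 100, 2000
      amp = 1.2                            # target amplitude vs unit noise

      # Simulate frames: unit-variance noise plus a slowly moving target.
      true_pos, true_vel = 20.0, 1.5
      frames = rng.normal(size=(n_frames, n_cells))
      for t in range(n_frames):
          frames[t, int(round(true_pos + t * true_vel)) % n_cells] += amp

      # Particles carry (position, velocity); TBD weights come from the
      # likelihood ratio N(z; amp, 1) / N(z; 0, 1) = exp(amp*z - amp^2/2).
      pos = rng.uniform(0, n_cells, n_particles)
      vel = rng.normal(0, 2.0, n_particles)
      for t in range(n_frames):
          pos = (pos + vel + rng.normal(0, 0.3, n_particles)) % n_cells
          vel += rng.normal(0, 0.1, n_particles)
          z = frames[t, np.round(pos).astype(int) % n_cells]
          log_lr = amp * z - 0.5 * amp**2
          w = np.exp(log_lr - log_lr.max())
          w /= w.sum()
          idx = rng.choice(n_particles, n_particles, p=w)   # resample
          pos, vel = pos[idx], vel[idx]

      print("estimate:", pos.mean(), "truth:", true_pos + (n_frames - 1) * true_vel)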

  5. Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.

    PubMed

    Chen, Ying-Chih; Wen, Chih-Yu

    2012-11-08

    This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is employed to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural network model, and further performs estimation fusion at the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
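
    Covariance intersection fuses two estimates whose cross-correlation is unknown by taking a convex combination of their information matrices; a small numpy sketch, choosing the weight to minimize the fused trace (all values illustrative):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def covariance_intersection(x1, P1, x2, P2):
          """Fuse two estimates with unknown cross-correlation:
          P^-1 = w*P1^-1 + (1-w)*P2^-1, with w minimizing trace(P)."""
          I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
          def fused(w):
              P = np.linalg.inv(w * I1 + (1 - w) * I2)
              x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
              return x, P
          w = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                              bounds=(0.0, 1.0), method="bounded").x
          return fused(w)

      x1, P1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
      x2, P2 = np.array([1.5, 1.8]), np.diag([4.0, 1.0])
      x, P = covariance_intersection(x1, P1, x2, P2)
      print(x, np.diag(P))   # fused estimate keeps each sensor's better axis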

  6. Altering spatial priority maps via statistical learning of target selection and distractor filtering.

    PubMed

    Ferrante, Oscar; Patacca, Alessia; Di Caro, Valeria; Della Libera, Chiara; Santandrea, Elisa; Chelazzi, Leonardo

    2018-05-01

    The cognitive system has the capacity to learn and make use of environmental regularities - known as statistical learning (SL), including for the implicit guidance of attention. For instance, it is known that attentional selection is biased according to the spatial probability of targets; similarly, changes in distractor filtering can be triggered by the unequal spatial distribution of distractors. Open questions remain regarding the cognitive/neuronal mechanisms underlying SL of target selection and distractor filtering. Crucially, it is unclear whether the two processes rely on shared neuronal machinery, with unavoidable cross-talk, or they are fully independent, an issue that we directly addressed here. In a series of visual search experiments, participants had to discriminate a target stimulus, while ignoring a task-irrelevant salient distractor (when present). We systematically manipulated spatial probabilities of either one or the other stimulus, or both. We then measured performance to evaluate the direct effects of the applied contingent probability distribution (e.g., effects on target selection of the spatial imbalance in target occurrence across locations) as well as its indirect or "transfer" effects (e.g., effects of the same spatial imbalance on distractor filtering across locations). By this approach, we confirmed that SL of both target and distractor location implicitly bias attention. Most importantly, we described substantial indirect effects, with the unequal spatial probability of the target affecting filtering efficiency and, vice versa, the unequal spatial probability of the distractor affecting target selection efficiency across locations. The observed cross-talk demonstrates that SL of target selection and distractor filtering are instantiated via (at least partly) shared neuronal machinery, as further corroborated by strong correlations between direct and indirect effects at the level of individual participants. Our findings are compatible

  7. Image Analyzed by Mars Rover for Selection of Target

    NASA Image and Video Library

    2010-03-23

    NASA's Opportunity rover used newly developed and uploaded software called AEGIS to analyze images and identify features that best matched criteria for selecting an observation target; the criteria for this image favored rocks that are larger and darker than others.

  8. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    PubMed Central

    Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-01-01

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems. PMID:29117100

  9. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    PubMed

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.
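
    For reference, the core firefly update on a continuous toy objective: each firefly moves toward every brighter one with distance-decayed attractiveness plus a shrinking random step. The binary feature-selection encoding and the learning-automata stabilization described in the paper are not shown.

      import numpy as np

      rng = np.random.default_rng(6)

      def sphere(x):                       # stand-in objective (minimize)
          return np.sum(x**2, axis=-1)

      n_fireflies, dim, n_iter = 25, 10, 200
      beta0, gamma, alpha = 1.0, 1.0, 0.1
      X = rng.uniform(-5, 5, (n_fireflies, dim))

      for _ in range(n_iter):
          f = sphere(X)
          for i in range(n_fireflies):
              for j in range(n_fireflies):
                  if f[j] < f[i]:          # j is brighter (better): move i toward j
                      r2 = np.sum((X[i] - X[j])**2)
                      beta = beta0 * np.exp(-gamma * r2)
                      X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                      f[i] = sphere(X[i])
          alpha *= 0.98                    # cool the random step

      print("best:", X[np.argmin(sphere(X))])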

  10. Utilization of Ancillary Data Sets for SMAP Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Podest, E.; Njoku, E.

    2011-01-01

    Algorithms being developed for the Soil Moisture Active Passive (SMAP) mission require a variety of both static and ancillary data. The selection of the most appropriate source for each ancillary data parameter is driven by a number of considerations, including accuracy, latency, availability, and consistency across all SMAP products and with SMOS (Soil Moisture Ocean Salinity). It is anticipated that initial selection of all ancillary datasets, which are needed for ongoing algorithm development activities on the SMAP algorithm testbed at JPL, will be completed within the year. These datasets will be updated as new or improved sources become available, and all selections and changes will be documented for the benefit of the user community. Wise choices in ancillary data will help to enable SMAP to provide new global measurements of soil moisture and freeze/thaw state at the targeted accuracy necessary to tackle hydrologically-relevant societal issues.

  11. Predicting selective drug targets in cancer through metabolic networks

    PubMed Central

    Folger, Ori; Jerby, Livnat; Frezza, Christian; Gottlieb, Eyal; Ruppin, Eytan; Shlomi, Tomer

    2011-01-01

    The interest in studying metabolic alterations in cancer and their potential role as novel targets for therapy has been rejuvenated in recent years. Here, we report the development of the first genome-scale network model of cancer metabolism, validated by correctly identifying genes essential for cellular proliferation in cancer cell lines. The model predicts 52 cytostatic drug targets, of which 40% are targeted by known, approved or experimental anticancer drugs, and the rest are new. It further predicts combinations of synthetic lethal drug targets, whose synergy is validated using available drug efficacy and gene expression measurements across the NCI-60 cancer cell line collection. Finally, potential selective treatments for specific cancers that depend on cancer type-specific downregulation of gene expression and somatic mutations are compiled. PMID:21694718

  12. An assessment of spacecraft target mode selection methods

    NASA Astrophysics Data System (ADS)

    Mercer, J. F.; Aglietti, G. S.; Remedia, M.; Kiley, A.

    2017-11-01

    Coupled Loads Analyses (CLAs), using finite element models (FEMs) of the spacecraft and launch vehicle to simulate critical flight events, are performed in order to determine the dynamic loadings that will be experienced by spacecraft during launch. A validation process is carried out on the spacecraft FEM beforehand to ensure that the dynamics of the analytical model sufficiently represent the behavior of the physical hardware. One aspect of concern is the containment of the FEM correlation and update effort to focus on the vibration modes which are most likely to be excited under test and CLA conditions. This study therefore provides new insight into the prioritization of spacecraft FEM modes for correlation to base-shake vibration test data. The work involved example application to large, unique, scientific spacecraft, with modern FEMs comprising over a million degrees of freedom. This comprehensive investigation explores: the modes inherently important to the spacecraft structures, irrespective of excitation; the particular 'critical modes' which produce peak responses to CLA level excitation; an assessment of several traditional target mode selection methods in terms of ability to predict these 'critical modes'; and an indication of the level of correlation these FEM modes achieve compared to corresponding test data. Findings indicate that, although the traditional methods of target mode selection have merit and are able to identify many of the modes of significance to the spacecraft, there are 'critical modes' which may be missed by conventional application of these methods. The use of different thresholds to select potential target modes from these parameters would enable identification of many of these missed modes. Ultimately, some consideration of the expected excitations is required to predict all modes likely to contribute to the response of the spacecraft in operation.
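
    One traditional target-mode selection metric discussed in this literature is modal effective mass, which flags modes that move significant mass along a base-excitation axis; a sketch on a toy three-mass chain (matrices invented for illustration):

      import numpy as np
      from scipy.linalg import eigh

      # Toy 3-DOF spring-mass chain, fixed at one end.
      M = np.diag([2.0, 1.0, 1.0])
      k = 1e4
      K = k * np.array([[ 2., -1.,  0.],
                        [-1.,  2., -1.],
                        [ 0., -1.,  1.]])

      vals, vecs = eigh(K, M)                 # generalized eigenproblem
      freqs = np.sqrt(vals) / (2 * np.pi)
      r = np.ones(3)                          # rigid-body vector, base translation
      for i in range(3):
          phi = vecs[:, i]                    # mass-normalized: phi^T M phi = 1
          gamma = phi @ M @ r                 # modal participation factor
          m_eff = gamma**2                    # effective mass for this mode
          print(f"mode {i+1}: {freqs[i]:8.2f} Hz, effective mass {m_eff:6.2f}")
      # Effective masses sum to the total mass (4.0) moved by the rigid-body vector.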

  13. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted-averaging and maximum-selection rules for the high-frequency coefficients. For low-frequency coefficients, the maximum-selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
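
    A stripped-down version of the coefficient-level fusion idea, in Python on a generic two-band decomposition: maximum selection for the low-frequency band (plain magnitude standing in for the paper's local-energy criterion) and weighted averaging for the high-frequency band, without the curvelet transform or the fuzzy optimization.

      import numpy as np

      def fuse_coefficients(a, b):
          """Transform-domain fusion of two decompositions: maximum selection
          for the low band, magnitude-weighted averaging for the high band."""
          low = np.where(np.abs(a["low"]) >= np.abs(b["low"]), a["low"], b["low"])
          w = np.abs(a["high"]) / (np.abs(a["high"]) + np.abs(b["high"]) + 1e-12)
          high = w * a["high"] + (1 - w) * b["high"]
          return {"low": low, "high": high}

      rng = np.random.default_rng(8)
      img1 = {"low": rng.normal(size=(8, 8)), "high": rng.normal(size=(16, 16))}
      img2 = {"low": rng.normal(size=(8, 8)), "high": rng.normal(size=(16, 16))}
      fused = fuse_coefficients(img1, img2)
      print(fused["low"].shape, fused["high"].shape)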

  14. Non-targeted evaluation of selectivity of water-compatible class selective adsorbents for the analysis of steroids in wastewater.

    PubMed

    Kopperi, Matias; Riekkola, Marja-Liisa

    2016-05-12

    Selective adsorbents for solid-phase extraction are needed to meet the low concentration requirements of new environmental quality standard directives, especially for the analysis of estrogens in wastewater. In this work, bulk polymerization procedures were first optimized for the synthesis of non-imprinted polymers (NIP) with low non-specific adsorption of nonpolar compounds in aqueous environments. Water-compatible molecularly imprinted polymers (MIP) were then synthesized by increasing the selectivity of the polymer towards steroids with a testosterone template (average imprinting factor > 10). In addition, the affinity of synthesized entrapped β-cyclodextrin-epichlorohydrin polymers (ECD) towards steroids was clarified. The polymers were applied to the extraction of spiked wastewater effluent samples and their performance compared to commercially available adsorbents. The selectivity of the studied adsorbents was evaluated utilizing liquid chromatography ‒ mass spectrometry as well as comprehensive two-dimensional gas chromatography ‒ time-of-flight mass spectrometry. Affinity between adsorbents and steroids as well as matrix removal potential were measured with targeted methodologies, and two novel non-targeted methodologies were proposed to quantitatively measure adsorbent selectivity by utilizing chemometrics. Semi-quantitative selectivity was measured from the ratio of peak areas between steroidal and other compounds. Semi-qualitative selectivity was calculated from the ratio between the number of tentatively identified steroidal and other compounds. The synthesized polymers provided good matrix removal potential (ion suppression 15-30%) and semi-qualitative selectivity (∼4 units) compared to the commercial adsorbents (ion suppression 45-80%, selectivity < 3 units). Simple non-targeted approaches provided a novel method of quantifying the selectivity of extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. QSRR modeling for the chromatographic retention behavior of some β-lactam antibiotics using forward and firefly variable selection algorithms coupled with multiple linear regression.

    PubMed

    Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M

    2018-05-11

    The continuous emergence of new β-lactam antibiotics creates a need for suitable analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing the conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed that there is no statistically significant difference between the models' results. Y-randomization validation showed that the obtained models result from significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models show comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We can conclude that in some cases a simple conventional feature selection algorithm can be used to generate robust and predictive models comparable to those generated using advanced ones. Copyright © 2018 Elsevier B.V. All rights reserved.
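
    The conventional half of the comparison is easy to sketch: greedy forward selection for multiple linear regression adds, at each round, the descriptor that most reduces the residual sum of squares. Synthetic data below; the descriptor matrix, coefficients, and stopping rule are illustrative only.

      import numpy as np

      def forward_select(X, y, max_vars=5):
          """Greedy forward selection for MLR: repeatedly add the descriptor
          that most reduces the residual sum of squares of the OLS fit."""
          n, p = X.shape
          chosen = []
          for _ in range(max_vars):
              best_j, best_rss = None, np.inf
              for j in range(p):
                  if j in chosen:
                      continue
                  A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
                  beta, *_ = np.linalg.lstsq(A, y, rcond=None)
                  rss = np.sum((y - A @ beta)**2)
                  if rss < best_rss:
                      best_j, best_rss = j, rss
              chosen.append(best_j)
          return chosen

      rng = np.random.default_rng(7)
      X = rng.normal(size=(31, 40))                 # 31 analytes x 40 descriptors
      y = 1.2 * X[:, 4] - 0.8 * X[:, 11] + rng.normal(scale=0.2, size=31)
      print(forward_select(X, y, max_vars=3))       # 4 and 11 should appear early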

  16. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    NASA Astrophysics Data System (ADS)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major fishery-producing countries due to its equatorial location. Grouper is a potentially lucrative market that can contribute to the income of the country because of its desirable taste, high demand and high price. However, the wild catch is insufficient to meet the demand for grouper, so there is a need to farm grouper to cater to market demand. Farming grouper requires prior knowledge of the proper nutrients needed, for which no exact data are available. Therefore, in this study, primary and secondary data were collected, despite the limited number of related papers, and 30 samples were investigated using standard deviation selection in an evolutionary algorithm. This study thus opens frontiers for extensive research on grouper fish feed formulation. Results show that standard deviation selection in an evolutionary algorithm is applicable: a feasible, low-fitness solution can be obtained quickly, and these fitness values can be used to further minimize the cost of farming grouper.

  17. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the use of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on three-dimensional structure and optical characteristics, respectively. Firstly, three-dimensional structure can be characterized by 3D-Zernike descriptors (3DZD); however, different 3DZD parameters describe three-dimensional structure at different levels of complexity, and the parameters need to be optimally selected for the various objects on the ground. Secondly, the features representing optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features will contain a large amount of redundant information, which may not improve the classification accuracy and can even degrade it. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method can effectively improve both computational efficiency and classification accuracy.
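
    A simplified sketch of binary artificial bee colony feature selection, in the spirit of the framework above: it keeps the employed-bee neighbour search and scout restart but omits the onlooker phase, and the fitness function is a toy placeholder standing in for classification accuracy on the stereo feature vector.

    ```python
    import numpy as np

    def abc_feature_select(fitness, n_features, n_bees=20, n_iter=100,
                           limit=10, seed=0):
        """Simplified binary ABC: each food source is a 0/1 mask over
        features; employed bees flip a random bit (neighbour search),
        greedy replacement keeps the better source, and sources that
        stagnate for `limit` trials are re-initialized (scout phase)."""
        rng = np.random.default_rng(seed)
        sources = rng.integers(0, 2, size=(n_bees, n_features))
        fit = np.array([fitness(s) for s in sources])
        trials = np.zeros(n_bees, dtype=int)
        for _ in range(n_iter):
            for i in range(n_bees):
                cand = sources[i].copy()
                cand[rng.integers(n_features)] ^= 1      # flip one bit
                f = fitness(cand)
                if f > fit[i]:
                    sources[i], fit[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
                if trials[i] >= limit:                   # scout: restart
                    sources[i] = rng.integers(0, 2, size=n_features)
                    fit[i], trials[i] = fitness(sources[i]), 0
        best = fit.argmax()
        return sources[best], fit[best]

    # Toy fitness (assumption): rewards the first 5 features and
    # penalizes subset size, mimicking accuracy minus redundancy.
    def toy_fitness(mask):
        return mask[:5].sum() - 0.1 * mask.sum()

    mask, score = abc_feature_select(toy_fitness, n_features=30)
    ```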

  18. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    NASA Astrophysics Data System (ADS)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation, with minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of the M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.
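
    A minimal sketch of how the four objectives described above might be scalarized into a single fitness for the optimizer. The weights, thresholds, and the colorimetric helpers `cct_of`, `cri_of`, and `flux_of` are assumptions (in practice such routines would come from a colorimetry library), not the paper's formulation.

    ```python
    import numpy as np

    def spectral_fitness(led_powers, led_spectra, target_spd,
                         cct_of, cri_of, flux_of,
                         target_cct=6500.0, min_flux=500.0,
                         w=(1.0, 0.5, 0.5, 0.2)):
        """Scalarized multiobjective fitness for a tunable-LED daylight
        simulator: spectral RMS error + relative CCT deviation + CRI
        shortfall + flux shortfall, combined with assumed weights."""
        spd = led_spectra.T @ led_powers              # mixed spectrum
        spectral_err = np.sqrt(np.mean((spd - target_spd) ** 2))
        cct_err = abs(cct_of(spd) - target_cct) / target_cct
        cri_pen = max(0.0, 90.0 - cri_of(spd))        # want CRI >= 90
        flux_pen = max(0.0, min_flux - flux_of(spd))  # want enough lumens
        return (w[0] * spectral_err + w[1] * cct_err +
                w[2] * cri_pen + w[3] * flux_pen)
    ```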

  19. SU-E-J-252: A Motion Algorithm to Extract Physical and Motion Parameters of a Mobile Target in Cone-Beam Computed Tomographic Imaging Retrospective to Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Ahmad, S; Alsbou, N

    Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospective to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images: the apparent length, CT number level and gradient of the mobile target. From these it extracted the actual length and CT number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup comprising three targets of different sizes, manufactured from homogeneous tissue-equivalent gel material and embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters retrospective to CBCT image reconstruction: the length of the target, the CT number level, and the motion amplitude. It relates these three unknowns to the measurable apparent length, CT number level and gradient of well-defined mobile targets in CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual length of the target and the motion amplitude. The cumulative CT number of a mobile target depended on the CT number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target depends on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time covered several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy.

  20. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2017-04-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden that many features impose on it. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database, and a total of 50 features extracted from these samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. From the experimental results, it is observed that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm.
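
    A minimal sketch of the GA variant of this wrapper setup: binary chromosomes over the 50 features, with cross-validated SVM accuracy as the fitness, tournament selection, uniform crossover, and bit-flip mutation. Population sizes, rates, and operators are assumptions, not the paper's exact configuration.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def ga_feature_select(X, y, n_pop=30, n_gen=40, p_mut=0.02, seed=0):
        """Binary-encoded GA: fitness = cross-validated correct
        classification rate of an SVM on the selected feature columns."""
        rng = np.random.default_rng(seed)
        n_feat = X.shape[1]
        pop = rng.integers(0, 2, size=(n_pop, n_feat))

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            return cross_val_score(SVC(), X[:, mask.astype(bool)],
                                   y, cv=3).mean()

        fit = np.array([fitness(ind) for ind in pop])
        for _ in range(n_gen):
            children = []
            for _ in range(n_pop):
                i, j = rng.integers(n_pop, size=2)       # tournament of 2
                a = pop[i] if fit[i] >= fit[j] else pop[j]
                i, j = rng.integers(n_pop, size=2)
                b = pop[i] if fit[i] >= fit[j] else pop[j]
                mask = rng.integers(0, 2, size=n_feat).astype(bool)
                child = np.where(mask, a, b)             # uniform crossover
                flip = rng.random(n_feat) < p_mut        # bit-flip mutation
                children.append(child ^ flip)
            pop = np.array(children, dtype=int)
            fit = np.array([fitness(ind) for ind in pop])
        best = fit.argmax()
        return pop[best].astype(bool), fit[best]
    ```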

  1. Antibody Drug Conjugates: Application of Quantitative Pharmacology in Modality Design and Target Selection.

    PubMed

    Sadekar, S; Figueroa, I; Tabrizi, M

    2015-07-01

    Antibody drug conjugates (ADCs) are a multi-component modality comprising an antibody targeting a cell-specific antigen, a potent drug/payload, and a linker that can be processed within cellular compartments to release the payload upon internalization. Numerous ADCs are being evaluated in both research and clinical settings within academia and the pharmaceutical industry due to their ability to selectively deliver potent payloads. Hence, there is a clear need to incorporate quantitative approaches during early stages of drug development for effective modality design and target selection. In this review, we describe a quantitative approach and framework for evaluating the interplay between drug- and systems-dependent properties (i.e., target expression, density, localization, turnover, and affinity) in order to deliver a sufficient amount of a potent payload into the relevant target cells. As discussed, theoretical approaches, with particular consideration given to various key properties of the target and modality, suggest that delivery of the payload into the relevant effect cells is more sensitive to antigen concentration for targets with slow turnover rates than for those with faster internalization rates. Further assessments also suggest that increasing doses beyond the threshold of the target capacity (a function of target internalization and expression) may not increase the maximum amount of payload delivered to the intended effect cells. This article explores the important application of quantitative sciences in the selection of the target and the design of ADC modalities.
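
    A back-of-envelope worked example of the target-capacity argument, with purely hypothetical numbers (not taken from the review): the maximum payload flux into a cell is bounded by antigen number times internalization rate, so dosing beyond that capacity cannot increase delivery.

    ```python
    # Hypothetical illustration of target capacity for payload delivery.
    antigens_per_cell = 1e5        # assumed target expression
    k_int_per_hour = 0.05          # assumed internalization rate constant
    max_adc_per_cell_hour = antigens_per_cell * k_int_per_hour  # 5,000/h

    drug_antibody_ratio = 4        # payload molecules per ADC (assumed)
    max_payload_per_cell_hour = max_adc_per_cell_hour * drug_antibody_ratio
    print(max_payload_per_cell_hour)   # 20,000 payload molecules/cell/h
    # Raising the dose above what saturates the 1e5 antigens does not
    # raise this ceiling; only expression or turnover can.
    ```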

  2. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    PubMed Central

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  3. TARPARE: a method for selecting target audiences for public health interventions.

    PubMed

    Donovan, R J; Egger, G; Francas, M

    1999-06-01

    This paper presents a model to assist the health promotion practitioner in systematically comparing and selecting appropriate target groups when a number of segments compete for attention and resources. TARPARE assesses previously identified segments on the following criteria: T: the Total number of persons in the segment; AR: the proportion of At Risk persons in the segment; P: the Persuasibility of the target audience; A: the Accessibility of the target audience; R: the Resources required to meet the needs of the target audience; and E: Equity and social justice considerations. The assessment can be applied qualitatively, or scores can be assigned to each segment; two examples are presented. TARPARE is a useful and flexible model for understanding the various segments in a population of interest and for assessing the potential viability of interventions directed at each segment. The model is particularly useful when segments must be prioritised within available budgets. It provides a disciplined approach to target selection and forces consideration of what weights should be applied to the different criteria, and how these might vary for different issues or objectives. TARPARE also assesses segments in terms of an overall likelihood of optimal impact for each segment. Targeting high scoring segments is likely to lead to greater program success than targeting low scoring segments.
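
    A toy sketch of the scored variant of TARPARE described above. The segment names, ratings, and weights are hypothetical; note that criteria such as R (resources required) may need reverse scoring so that a high score always means a more attractive segment.

    ```python
    # Illustrative TARPARE scoring: rate each segment 1-10 on the six
    # criteria and combine with issue-specific weights (all assumed).
    criteria = ["T", "AR", "P", "A", "R", "E"]
    weights = {"T": 2.0, "AR": 2.0, "P": 1.5, "A": 1.0, "R": 1.0, "E": 1.5}

    segments = {
        "young adult smokers":     {"T": 7, "AR": 9, "P": 5, "A": 6, "R": 4, "E": 7},
        "pregnant smokers":        {"T": 3, "AR": 10, "P": 7, "A": 7, "R": 5, "E": 9},
        "older long-term smokers": {"T": 8, "AR": 8, "P": 3, "A": 5, "R": 6, "E": 5},
    }

    def tarpare_score(scores, weights):
        # R is assumed already reverse-scored (high = few resources needed)
        return sum(weights[c] * scores[c] for c in criteria)

    ranked = sorted(segments,
                    key=lambda s: tarpare_score(segments[s], weights),
                    reverse=True)
    print(ranked)   # segments in priority order under the assumed weights
    ```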

  4. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    NASA Astrophysics Data System (ADS)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
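
    A minimal sketch of the RANSAC idea applied to straight-track candidates in a point cloud: sample two hits, form a 3D line, count inliers, keep the best model; uncorrelated noise hits fall out as outliers. The tolerance, iteration count, and straight-line track model are simplifying assumptions, not the AT-TPC implementation.

    ```python
    import numpy as np

    def ransac_line_3d(points, n_iter=500, tol=2.0, seed=0):
        """Fit a 3D line to hit points with RANSAC and return the
        inlier mask of the best model."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            i, j = rng.choice(len(points), size=2, replace=False)
            p0, d = points[i], points[j] - points[i]
            norm = np.linalg.norm(d)
            if norm == 0:
                continue
            d = d / norm
            # point-to-line distance: ||(p - p0) - ((p - p0).d) d||
            v = points - p0
            dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
            inliers = dist < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    # Hypothetical usage: points is an (N, 3) array of hits in mm;
    # applying the fit repeatedly, removing inliers each time,
    # separates pile-up tracks, and track intersections estimate
    # the reaction vertex.
    ```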

  5. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    NASA Astrophysics Data System (ADS)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the credit assessment process. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to evaluate its performance in predicting customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.

  6. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    NASA Astrophysics Data System (ADS)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks, based on dynamic network status and different service types, was proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed via a growth matrix. The index weights are calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimal communication network is selected. Simulation results show that the proposed algorithm can implement dynamic access selection in the heterogeneous wireless networks of SDG effectively and reduce the network blocking probability.
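
    For reference, a small sketch of the entropy-weight step: indices whose values differ more across candidate networks carry more information and receive larger weights. The decision matrix below is hypothetical, and the normalization convention is an assumption.

    ```python
    import numpy as np

    def entropy_weights(R):
        """Objective index weights by the entropy-weight method: R is an
        (m networks x n indices) decision matrix, assumed normalized so
        all columns are comparable benefit-type indices."""
        m, n = R.shape
        P = R / R.sum(axis=0, keepdims=True)       # column-wise proportions
        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)
        E = -(P * logs).sum(axis=0) / np.log(m)    # entropy of each index
        d = 1.0 - E                                # degree of divergence
        return d / d.sum()                         # normalized weights

    # Hypothetical matrix: 3 candidate networks x 4 indices
    # (bandwidth, 1/delay, 1/jitter, signal strength), benefit-normalized.
    R = np.array([[0.9, 0.4, 0.6, 0.8],
                  [0.5, 0.9, 0.7, 0.6],
                  [0.7, 0.6, 0.9, 0.5]])
    print(entropy_weights(R))
    ```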

  7. Optimal feature selection using a modified differential evolution algorithm and its effectiveness for prediction of heart disease.

    PubMed

    Vivekanandan, T; Sriman Narayana Iyengar, N Ch

    2017-11-01

    Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
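
    For concreteness, a sketch of one step of the DE/rand/2/exp strategy named above: the mutant is built from five distinct random individuals, and exponential crossover copies a contiguous run of mutant genes into the target. Parameter values are illustrative defaults, not the paper's tuned settings.

    ```python
    import numpy as np

    def de_rand_2_exp(pop, i, F=0.5, CR=0.9, rng=None):
        """One DE/rand/2/exp step for target individual i: mutant
        v = x_r1 + F*(x_r2 - x_r3) + F*(x_r4 - x_r5), then exponential
        crossover with crossover rate CR."""
        if rng is None:
            rng = np.random.default_rng()
        n_pop, dim = pop.shape
        r = rng.choice([k for k in range(n_pop) if k != i],
                       size=5, replace=False)
        v = (pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
                       + F * (pop[r[3]] - pop[r[4]]))
        trial = pop[i].copy()
        j = rng.integers(dim)        # start of the exponential run
        L = 0
        while True:                  # run length is geometric in CR
            trial[(j + L) % dim] = v[(j + L) % dim]
            L += 1
            if L >= dim or rng.random() >= CR:
                break
        return trial
    ```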

  8. A step by step selection method for the location and the size of a waste-to-energy facility targeting the maximum output energy and minimization of gate fee.

    PubMed

    Kyriakis, Efstathios; Psomopoulos, Constantinos; Kokkotis, Panagiotis; Bourtsalas, Athanasios; Themelis, Nikolaos

    2017-06-23

    This study develops an algorithm that provides a step-by-step selection method for the location and size of a waste-to-energy facility targeting maximum output energy, while also considering the basic obstacle in many cases, the gate fee. Various parameters were identified and evaluated in order to formulate the proposed decision-making method in the form of an algorithm. The principal simulation input is the amount of municipal solid waste (MSW) available for incineration, which, along with its net calorific value, is the most important factor for the feasibility of the plant. Moreover, the research focuses both on the parameters that could increase energy production and on those that affect the R1 energy efficiency factor. Estimation of the final gate fee is achieved through an economic analysis of the entire project, investigating both the expenses and the revenues expected for the selected site and facility outputs; a number of common revenue methods were included in the algorithm. The developed algorithm has been validated using three case studies in Greece - Athens, Thessaloniki, and Central Greece, where the cities of Larisa and Volos were selected for the application of the proposed decision-making tool. These case studies were selected based on a previous publication by two of the authors in which these areas were examined. Results reveal that the development of a "solid" methodological approach to selecting the site and size of a waste-to-energy (WtE) facility is feasible. However, maximizing the energy efficiency factor R1 requires high utilization factors, while minimizing the final gate fee requires a high R1, high metals recovery from the bottom ash, and economic exploitation of recovered raw materials, if any.
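
    For reference, the R1 energy efficiency factor referred to above is commonly stated (per the EU Waste Framework Directive, 2008/98/EC) as follows; the notation is the directive's, and this formula is quoted for context rather than taken from the paper.

    ```latex
    % R1 energy efficiency factor (EU Waste Framework Directive 2008/98/EC)
    R1 = \frac{E_p - (E_f + E_i)}{0.97\,(E_w + E_f)}
    % E_p: annual energy produced as heat or electricity
    % E_f: annual energy input from fuels contributing to steam production
    % E_w: annual energy contained in the treated waste
    % E_i: annual imported energy excluding E_w and E_f
    % 0.97: factor accounting for losses due to bottom ash and radiation
    ```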

  9. A difference tracking algorithm based on discrete sine transform

    NASA Astrophysics Data System (ADS)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

    Target tracking is an important field of computer vision. Template matching algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to changes in image gray level: when brightness or gray level changes, the tracking algorithm is disturbed by high-frequency information, tracking accuracy is reduced, and the target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of gray-level or brightness changes. The algorithm combines the discrete sine transform with a difference operation to map the target image into a digital sequence. A Kalman filter predicts the target position, the Hamming distance measures the similarity between a candidate window and the template, the window closest to the template is taken as the tracked target, and the template is updated from it. The algorithm is tested in this paper: compared with SSD and NCC template matching, it tracks the target stably under gray-level or brightness changes, and its tracking speed meets real-time requirements.
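
    A loose sketch in the spirit of this pipeline, using a perceptual-hash-style signature: transform a patch with the 2-D discrete sine transform, binarize a low-frequency block against its median (which discards absolute gray level), and compare candidate windows to the template by Hamming distance. The block size and binarization rule are assumptions, not the paper's exact mapping.

    ```python
    import numpy as np
    from scipy.fft import dstn

    def dst_hash(patch, keep=8):
        """Binary signature of an image patch: 2-D DST, keep the
        keep x keep low-frequency block, binarize against its median.
        Median binarization discards absolute intensity, which is what
        gives the Hamming comparison robustness to brightness change."""
        coeffs = dstn(patch.astype(float), type=2)[:keep, :keep]
        return (coeffs > np.median(coeffs)).ravel()

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    # Sliding-window search (illustrative): around the Kalman-predicted
    # position, the window whose dst_hash is closest in Hamming distance
    # to the template's hash is taken as the target, and the template
    # hash is refreshed from it.
    ```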

  10. Wavelength band selection method for multispectral target detection.

    PubMed

    Karlholm, Jörgen; Renhorn, Ingmar

    2002-11-10

    A framework is proposed for the selection of wavelength bands for multispectral sensors by use of hyperspectral reference data. Using results from detection theory, we derive a cost function that is minimized by a set of spectral bands optimal in terms of detection performance for discrimination between a class of small rare targets and clutter with a known spectral distribution. The method may be used, e.g., in the design of multispectral infrared search-and-track and electro-optical missile warning sensors, where a low false-alarm rate and a high detection probability for small targets against a clutter background are of critical importance, but the required high frame rate prevents the use of hyperspectral sensors.
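
    As a generic detection-theoretic surrogate for the kind of cost function described (not the one derived in the paper), one can greedily add the band that maximizes the Mahalanobis separation between the target-class mean and the clutter distribution over the bands selected so far:

    ```python
    import numpy as np

    def greedy_band_select(mu_t, mu_c, cov_c, n_bands):
        """Greedy band selection maximizing the Mahalanobis separation
        (mu_t - mu_c)' C^-1 (mu_t - mu_c) over the chosen bands, where
        mu_t/mu_c are target/clutter mean spectra and cov_c is the
        clutter covariance. A simplified stand-in for a detection cost."""
        selected = []
        remaining = list(range(len(mu_t)))
        while remaining and len(selected) < n_bands:
            def separation(cols):
                d = (mu_t - mu_c)[cols]
                C = cov_c[np.ix_(cols, cols)]
                return float(d @ np.linalg.solve(C, d))
            j = max(remaining, key=lambda b: separation(selected + [b]))
            selected.append(j)
            remaining.remove(j)
        return sorted(selected)
    ```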

  11. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond.

    PubMed

    Morita, Kenji; Jitsev, Jenia; Morrison, Abigail

    2016-09-15

    Value-based action selection has been suggested to be realized in the corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have constructed and verified this notion, and provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. The striatal neurons are mostly inhibitory, and lateral inhibition among them has been classically proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., 'max' operation). Although this view has been challenged by the revealed weakness, sparseness, and asymmetry of lateral inhibition, which suggest more complex dynamics, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might co-implement an RL algorithm called Q-learning; the cortical circuit might also similarly serve for other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which negatively impacts dopamine neurons so that they can calculate reward-prediction-error. Regarding the suggested more complex dynamics of striatal, as well as cortical, circuits on long time scales, which could be viewed as a sequence of short WTA fragments, computational roles remain open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of the internal model, (2) a single state/action by the whole trajectory, or (3) probabilistic sampling of state/action. Copyright © 2016. Published by Elsevier B.V.
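
    To make the "max" versus "soft-max" distinction concrete, here is a minimal sketch of the two selection rules and of where they enter Q-learning versus SARSA updates. This is textbook RL pseudocode offered as an illustration, not a circuit model from the review.

    ```python
    import numpy as np

    def wta_select(q):
        """'Striatal' hard selection: Winner-Take-All over action values."""
        return int(np.argmax(q))

    def softmax_select(q, beta=2.0, rng=None):
        """'Cortical' probabilistic selection: soft-max with inverse
        temperature beta (large beta approaches WTA)."""
        rng = rng if rng is not None else np.random.default_rng()
        p = np.exp(beta * (q - q.max()))
        p /= p.sum()
        return int(rng.choice(len(q), p=p))

    # In the value update, Q-learning uses the max over next-state
    # actions, whereas SARSA uses the action actually chosen (e.g. by
    # soft-max):
    #   Q[s, a] += alpha * (r + gamma * Q[s2].max()  - Q[s, a])  # Q-learning
    #   Q[s, a] += alpha * (r + gamma * Q[s2, a2]    - Q[s, a])  # SARSA
    ```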

  12. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect in the field of machine learning. It entails the search for an optimal subset from a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As the wrapper method, the evolutionary harmony search (HS) algorithm is employed; this metaheuristic aims at finding the best discriminating set of features from the original data. The obtained features were then used as input to an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
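
    A compact sketch of the wrapper half of this hybrid, binary harmony search over feature masks; the memory size, HMCR/PAR values, and the placeholder fitness (standing in for WNN classification accuracy on PCA-reduced wavelet features) are assumptions.

    ```python
    import numpy as np

    def harmony_search(fitness, dim, hms=20, hmcr=0.9, par=0.3,
                       n_iter=200, seed=0):
        """Minimal binary harmony search: each harmony is a 0/1 mask over
        features; new harmonies are built gene-by-gene from memory with
        probability hmcr, pitch-adjusted by a bit flip with probability
        par, and otherwise drawn at random; the worst harmony is replaced
        when the new one is better."""
        rng = np.random.default_rng(seed)
        memory = rng.integers(0, 2, size=(hms, dim))
        fit = np.array([fitness(h) for h in memory])
        for _ in range(n_iter):
            new = np.empty(dim, dtype=int)
            for d in range(dim):
                if rng.random() < hmcr:
                    new[d] = memory[rng.integers(hms), d]  # from memory
                    if rng.random() < par:
                        new[d] ^= 1                        # pitch adjustment
                else:
                    new[d] = rng.integers(0, 2)            # random choice
            f = fitness(new)
            worst = fit.argmin()
            if f > fit[worst]:                             # replace worst
                memory[worst], fit[worst] = new, f
        best = fit.argmax()
        return memory[best].astype(bool), fit[best]
    ```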

  13. Quasar target selection fiber efficiency

    NASA Astrophysics Data System (ADS)

    Newberg, Heidi; Yanny, Brian

    1996-05-01

    We present estimates of the efficiency for finding QSOs as a function of limiting magnitude and galactic latitude. From these estimates, we have formulated a target selection strategy that should net 80,000 QSOs in the north galactic cap with an average of 70 fibers per plate, not including fibers reserved for high-redshift quasars. With this plan, we expect 54% of the targets to be QSOs. The North Galactic Cap is divided into two zones of high and low stellar density. We use about five times as many fibers for QSO candidates in the half of the survey with the lower stellar density as we use in the half with higher stellar density. The current plan assigns 15% of the fibers to FIRST radio sources; if these are not available, those fibers would be allocated to lower probability QSO sources, dropping the total number of QSOs by a small factor (5%). We will find about 17,000 additional quasars in the southern strips, and maybe a few more at very high redshift. Use was made of two data sets: the star and quasar simulated test data generated by Don Schneider, and the data from UJFN plate surveys by Koo (1986) and Kron (1980). These data were compared to results from the Palomar-Green Survey and a recent survey by Pat Osmer and collaborators.

  14. Analysis of A Drug Target-based Classification System using Molecular Descriptors.

    PubMed

    Lu, Jing; Zhang, Pin; Bi, Yi; Luo, Xiaomin

    2016-01-01

    Drug-target interaction is an important topic in drug discovery and drug repositioning. The KEGG database offers drug annotation and classification using a target-based classification system. In this study, we investigated five target-based classes: (I) G protein-coupled receptors; (II) nuclear receptors; (III) ion channels; (IV) enzymes; (V) pathogens, using molecular descriptors to represent each drug compound. Two popular feature selection methods, maximum relevance minimum redundancy and incremental feature selection, were adopted to extract the important descriptors. Meanwhile, an optimal prediction model based on the nearest neighbor algorithm was constructed, which achieved the best result in identifying drug target-based classes. Finally, some key descriptors were discussed to uncover their important roles in the identification of drug-target classes.
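
    A minimal sketch of the incremental feature selection (IFS) step described above: given descriptors already ranked (e.g. by mRMR), evaluate nested subsets with a nearest-neighbor classifier and keep the best-scoring prefix. The classifier settings and cross-validation scheme are assumptions.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def incremental_feature_selection(X, y, ranked_features):
        """IFS: evaluate nested subsets [f1], [f1,f2], ... of the ranked
        descriptor list with a 1-NN classifier and return the prefix
        with the best cross-validated accuracy."""
        best_k, best_acc = 1, 0.0
        for k in range(1, len(ranked_features) + 1):
            cols = ranked_features[:k]
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                                  X[:, cols], y, cv=5).mean()
            if acc > best_acc:
                best_k, best_acc = k, acc
        return ranked_features[:best_k], best_acc
    ```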

  15. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. Due to their high dimensionality and volume, however, it is hard to obtain satisfactory classification performance. In order to reduce HSI data dimensionality in preparation for high classification accuracy, we propose combining a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of AIS, composed of clonal selection and elite antibody mutation, including an evaluation process with an optional index factor (OIF). Classification experiments on a San Diego Naval Base hyperspectral remote sensing dataset acquired by AVIRIS show that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.
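
    A small sketch of a mixed RBF/sigmoid kernel plugged into an SVM via scikit-learn's precomputed-kernel interface. The mixing weight and kernel parameters (which the AIS optimizer would presumably tune) are assumptions; note the sigmoid kernel is not positive semi-definite for all parameter choices, so those parameters need care.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel
    from sklearn.svm import SVC

    def hybrid_kernel(X1, X2, lam=0.7, gamma=0.1, coef0=0.0):
        """Mixture of RBF and sigmoid kernels with mixing weight lam."""
        return (lam * rbf_kernel(X1, X2, gamma=gamma)
                + (1.0 - lam) * sigmoid_kernel(X1, X2,
                                               gamma=gamma, coef0=coef0))

    # Usage with the precomputed-kernel interface (X_train/X_test are
    # hypothetical band-selected spectra):
    # clf = SVC(kernel="precomputed")
    # clf.fit(hybrid_kernel(X_train, X_train), y_train)
    # y_pred = clf.predict(hybrid_kernel(X_test, X_train))
    ```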

  16. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms.

    PubMed

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-06-06

    Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation in modern cities. Traditional contact sensing techniques for health monitoring of BRT viaducts suffer from the deficiency that the normal free flow of traffic must be blocked during measurement. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented brief (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
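
    For orientation, a minimal OpenCV sketch of the ORB detection and Hamming-distance matching step that a CMT-style tracker builds on; the parameter values are assumptions, and the full CMT consensus voting is omitted.

    ```python
    import cv2

    def orb_match(frame, template, max_matches=50):
        """Detect ORB keypoints in the current frame and the target
        template, and match them with a brute-force Hamming matcher
        (ORB descriptors are binary). The displacement of matched
        keypoints gives the structure motion in pixels; a separately
        calibrated scaling factor converts pixels to millimetres."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(template, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2),
                         key=lambda m: m.distance)
        return kp1, kp2, matches[:max_matches]
    ```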

  17. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    PubMed Central

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-01-01

    Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation in modern cities. Traditional contact sensing techniques for health monitoring of BRT viaducts suffer from the deficiency that the normal free flow of traffic must be blocked during measurement. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented brief (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275

  18. The SAMI Galaxy Survey: instrument specification and target selection

    NASA Astrophysics Data System (ADS)

    Bryant, J. J.; Owers, M. S.; Robotham, A. S. G.; Croom, S. M.; Driver, S. P.; Drinkwater, M. J.; Lorente, N. P. F.; Cortese, L.; Scott, N.; Colless, M.; Schaefer, A.; Taylor, E. N.; Konstantopoulos, I. S.; Allen, J. T.; Baldry, I.; Barnes, L.; Bauer, A. E.; Bland-Hawthorn, J.; Bloom, J. V.; Brooks, A. M.; Brough, S.; Cecil, G.; Couch, W.; Croton, D.; Davies, R.; Ellis, S.; Fogarty, L. M. R.; Foster, C.; Glazebrook, K.; Goodwin, M.; Green, A.; Gunawardhana, M. L.; Hampton, E.; Ho, I.-T.; Hopkins, A. M.; Kewley, L.; Lawrence, J. S.; Leon-Saval, S. G.; Leslie, S.; McElroy, R.; Lewis, G.; Liske, J.; López-Sánchez, Á. R.; Mahajan, S.; Medling, A. M.; Metcalfe, N.; Meyer, M.; Mould, J.; Obreschkow, D.; O'Toole, S.; Pracy, M.; Richards, S. N.; Shanks, T.; Sharp, R.; Sweet, S. M.; Thomas, A. D.; Tonini, C.; Walcher, C. J.

    2015-03-01

    The SAMI Galaxy Survey will observe 3400 galaxies with the Sydney-AAO Multi-object Integral-field spectrograph (SAMI) on the Anglo-Australian Telescope in a 3-yr survey which began in 2013. We present the throughput of the SAMI system, the science basis and specifications for the target selection, the survey observation plan and the combined properties of the selected galaxies. The survey includes four volume-limited galaxy samples based on cuts in a proxy for stellar mass, along with low-stellar-mass dwarf galaxies all selected from the Galaxy And Mass Assembly (GAMA) survey. The GAMA regions were selected because of the vast array of ancillary data available, including ultraviolet through to radio bands. These fields are on the celestial equator at 9, 12 and 14.5 h, and cover a total of 144 deg2 (in GAMA-I). Higher density environments are also included with the addition of eight clusters. The clusters have spectroscopy from 2-degree Field Galaxy Redshift Survey (2dFGRS) and Sloan Digital Sky Survey (SDSS) and photometry in regions covered by the SDSS and/or VLT Survey Telescope/ATLAS. The aim is to cover a broad range in stellar mass and environment, and therefore the primary survey targets cover redshifts 0.004 < z < 0.095, magnitudes r_pet < 19.4, stellar masses 10^7-10^12 M⊙, and environments from isolated field galaxies through groups to clusters of ~10^15 M⊙.

  19. Selectivity on-target of bromodomain chemical probes by structure-guided medicinal chemistry and chemical biology

    PubMed Central

    Galdeano, Carles; Ciulli, Alessio

    2017-01-01

    Targeting epigenetic proteins is a rapidly growing area for medicinal chemistry and drug discovery. Recent years have seen an explosion of interest in developing small molecules binding to bromodomains, the readers of acetyl-lysine modifications. A plethora of co-crystal structures has motivated focused fragment-based design and optimization programs within both industry and academia. These efforts have yielded several compounds entering the clinic, and many more are increasingly being used as chemical probes to interrogate bromodomain biology. High selectivity of chemical probes is necessary to ensure biological activity is due to an on-target effect. Here, we review the state-of-the-art of bromodomain-targeting compounds, focusing on the structural basis for their on-target selectivity or lack thereof. We also highlight chemical biology approaches to enhance on-target selectivity. PMID:27193077

  20. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.

  1. Directional enhancement of selected high-order-harmonics from intense laser irradiated blazed grating targets.

    PubMed

    Zhang, Guobo; Chen, Min; Liu, Feng; Yuan, Xiaohui; Weng, Suming; Zheng, Jun; Ma, Yanyun; Shao, Fuqiu; Sheng, Zhengming; Zhang, Jie

    2017-10-02

    Interactions of relativistically intense lasers with solid targets have been proven to be a promising way to generate high-order harmonics, which can be used to diagnose ultrafast phenomena. However, their emission direction and spectra still lack tunability. Based upon two-dimensional particle-in-cell simulations, we show that directional enhancement of selected high-order harmonics can be realized using blazed grating targets. Such targets select harmonics with frequencies that are integer multiples of the grating frequency. Meanwhile, the radiation intensity and emission area of the harmonics are increased. The emission direction is controlled by tailoring the local blazed structure. Theoretical and electron-dynamics analyses of harmonics generation, selection and directional enhancement in the interaction between a multi-cycle laser and a grating target are carried out. These studies will benefit the generation and application of laser-plasma-based high-order harmonics.

  2. Signatures of DNA target selectivity by ETS transcription factors

    PubMed Central

    Kim, Hye Mi

    2017-01-01

    The ETS family of transcription factors is a functionally heterogeneous group of gene regulators that share a structurally conserved, eponymous DNA-binding domain. DNA target specificity derives from combinatorial interactions with other proteins as well as intrinsic heterogeneity among ETS domains. Emerging evidence suggests molecular hydration as a fundamental feature that defines the intrinsic heterogeneity in DNA target selection and susceptibility to epigenetic DNA modification. This perspective invokes novel hypotheses in the regulation of ETS proteins in physiologic osmotic stress, their pioneering potential in heterochromatin, and the effects of passive and pharmacologic DNA demethylation on ETS regulation. PMID:28301293

  3. Signatures of DNA target selectivity by ETS transcription factors.

    PubMed

    Poon, Gregory M K; Kim, Hye Mi

    2017-05-27

    The ETS family of transcription factors is a functionally heterogeneous group of gene regulators that share a structurally conserved, eponymous DNA-binding domain. DNA target specificity derives from combinatorial interactions with other proteins as well as intrinsic heterogeneity among ETS domains. Emerging evidence suggests molecular hydration as a fundamental feature that defines the intrinsic heterogeneity in DNA target selection and susceptibility to epigenetic DNA modification. This perspective invokes novel hypotheses in the regulation of ETS proteins in physiologic osmotic stress, their pioneering potential in heterochromatin, and the effects of passive and pharmacologic DNA demethylation on ETS regulation.

  4. Social comparisons in adults with type 2 diabetes: Patients' reasons for target selection.

    PubMed

    Arigo, Danielle; Cornell, Max; Smyth, Joshua M

    2018-07-01

    To examine reasons for selecting a social comparison target (i.e. a specific other for relative self-evaluation), and their influence on affect and motivation for self-care, in type 2 diabetes (T2DM). Adults with T2DM (n = 180, mean A1c = 7.6%) chose to read about one of four targets. Participants rated five reasons for their choice (strongly disagree - strongly agree), and rated affect and self-care motivation before and after reading. "To boost confidence in my ability to manage diabetes" was rated highest overall (ps < 0.01), though choosing worse-off (vs. better-off) targets was associated with "to gain useful information about how to improve" (p = 0.04, effect size = 0.05). Selection in order to feel better worked for those who chose better-off targets; choosing worse-off targets for this purpose worsened mood and stress (ps < 0.04, effect sizes = 0.02). Choosing worse-off targets to learn about similar others reduced self-care motivation (p < 0.01, effect size = 0.05). Selection in order to boost confidence increased motivation only among those who chose better-off targets (p = 0.01). Patients' reasons for a particular comparison are associated with short-term changes in affect and self-care motivation, and warrant greater empirical and clinical attention.

  5. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. Contributions A Abubakar, P M van den Berg and T M

  6. Lepidopteran HMG-CoA reductase is a potential selective target for pest control

    PubMed Central

    Li, Yuan-mei; Huang, Juan; Tobe, Stephen S.

    2017-01-01

    As a consequence of the negative environmental impacts of some insecticides, the discovery of eco-friendly insecticides and targets has received global attention in recent years. Sequence alignment and structural comparison of the rate-limiting enzyme HMG-CoA reductase (HMGR) revealed differences between lepidopteran pests and other organisms, which suggested insect HMGR could be a selective insecticide target candidate. Inhibition of JH biosynthesis in vitro confirmed that HMGR inhibitors showed a potent lethal effect on the lepidopteran pest Manduca sexta, whereas there was little effect on JH biosynthesis in Apis mellifera and Diploptera punctata. The pest control application of these inhibitors demonstrated that they can be insecticide candidates with potent ovicidal activity, larvicidal activity and insect growth regulatory effects. The present study has validated that lepidopteran HMGR can be a potent selective insecticide target, and the HMGR inhibitors (especially type II statins) could be selective insecticide candidates and lead compounds. Furthermore, we demonstrated that sequence alignment, homology modeling and structural comparison may be useful for determining potential enzymes or receptors which can be eco-friendly pesticide targets. PMID:28133568

  7. Lepidopteran HMG-CoA reductase is a potential selective target for pest control.

    PubMed

    Li, Yuan-Mei; Kai, Zhen-Peng; Huang, Juan; Tobe, Stephen S

    2017-01-01

    As a consequence of the negative environmental impacts of some insecticides, the discovery of eco-friendly insecticides and targets has received global attention in recent years. Sequence alignment and structural comparison of the rate-limiting enzyme HMG-CoA reductase (HMGR) revealed differences between lepidopteran pests and other organisms, which suggested insect HMGR could be a selective insecticide target candidate. Inhibition of JH biosynthesis in vitro confirmed that HMGR inhibitors showed a potent lethal effect on the lepidopteran pest Manduca sexta, whereas there was little effect on JH biosynthesis in Apis mellifera and Diploptera punctata. The pest control application of these inhibitors demonstrated that they can be insecticide candidates with potent ovicidal activity, larvicidal activity and insect growth regulatory effects. The present study has validated that lepidopteran HMGR can be a potent selective insecticide target, and the HMGR inhibitors (especially type II statins) could be selective insecticide candidates and lead compounds. Furthermore, we demonstrated that sequence alignment, homology modeling and structural comparison may be useful for determining potential enzymes or receptors which can be eco-friendly pesticide targets.

  8. Fast internal marker tracking algorithm for onboard MV and kV imaging systems

    PubMed Central

    Mao, W.; Wiersma, R. D.; Xing, L.

    2008-01-01

    Intrafraction organ motion can limit the advantage of highly conformal dose techniques such as intensity modulated radiation therapy (IMRT) due to target position uncertainty. To ensure high accuracy in beam targeting, real-time knowledge of the target location is highly desired throughout the beam delivery process. This knowledge can be gained through imaging of internally implanted radio-opaque markers with fluoroscopic or electronic portal imaging devices (EPID). In the case of MV based images, marker detection can be problematic due to the significantly lower contrast between different materials in comparison to their kV-based counterparts. This work presents a fully automated algorithm capable of detecting implanted metallic markers in both kV and MV images with high consistency. Using prior CT information, the algorithm predefines the volumetric search space without manual region-of-interest (ROI) selection by the user. Depending on the template selected, both spherical and cylindrical markers can be detected. Multiple markers can be simultaneously tracked without indexing confusion. Phantom studies show detection success rates of 100% for both kV and MV image data. In addition, application of the algorithm to real patient image data results in successful detection of all implanted markers for MV images. Near real-time operational speeds of ~10 frames/sec for the detection of five markers in a 1024x768 image are accomplished using an ordinary PC workstation. PMID:18561670

  9. A weight based genetic algorithm for selecting views

    NASA Astrophysics Data System (ADS)

    Talebian, Seyed H.; Kareem, Sameem A.

    2013-03-01

    Data warehousing is a technology designed to support decision making. A data warehouse is built by extracting large amounts of data from different operational systems, transforming it to a consistent form, and loading it into the central repository. The type of queries in a data warehouse environment differs from those in operational systems: analytical queries issued in data warehouses involve summarization of large volumes of data and therefore normally take a long time to answer, yet they must be answered quickly to enable managers to make decisions as soon as possible. An essential need in this environment is therefore to improve query performance. One of the most popular methods for this task is utilizing pre-computed query results: whenever a new query is submitted, instead of computing it on the fly over a large underlying database, pre-computed results, or views, are used to answer it. Although the ideal option would be to pre-compute and save all possible views, in practice disk space constraints and the overhead of view updates make this infeasible. Therefore, we need to select a subset of possible views to save on disk. The problem of selecting the right subset of views is an important challenge in data warehousing. In this paper we suggest a Weight-Based Genetic Algorithm (WBGA) for solving the view selection problem with two objectives.

  10. Glioma Selectivity of Magnetically Targeted Nanoparticles: A Role of Abnormal Tumor Hydrodynamics

    PubMed Central

    Chertok, Beata; David, Allan E.; Huang, Yongzhuo; Yang, Victor C.

    2007-01-01

    Magnetic targeting is a promising strategy for achieving localized drug delivery. Application of this strategy to treat brain tumors, however, is complicated by their deep intracranial location, since magnetic field density cannot be focused at a distance from an externally applied magnet. This study intended to examine whether, with magnetic targeting, pathological alteration in brain tumor flow dynamics could be of value in discriminating the diseased site from healthy brain. To address this question, the capture of magnetic nanoparticles was first assessed in vitro using a simple flow system under theoretically estimated glioma and normal brain flow conditions. Secondly, accumulation of nanoparticles via magnetic targeting was evaluated in vivo using 9L-glioma bearing rats. In vitro results that predicted a 7.6-fold increase in nanoparticle capture at glioma- versus contralateral brain-relevant flow rates were relatively consistent with the 9.6-fold glioma selectivity of nanoparticle accumulation over the contralateral brain observed in vivo. Based on these findings, the in vitro ratio of nanoparticle capture can be viewed as a plausible indicator of in vivo glioma selectivity. Overall, it can be concluded that the decreased blood flow rate in glioma, reflecting tumor vascular abnormalities, is an important contributor to glioma-selective nanoparticle accumulation with magnetic targeting. PMID:17628157

  11. Glioma selectivity of magnetically targeted nanoparticles: a role of abnormal tumor hydrodynamics.

    PubMed

    Chertok, Beata; David, Allan E; Huang, Yongzhuo; Yang, Victor C

    2007-10-08

    Magnetic targeting is a promising strategy for achieving localized drug delivery. Application of this strategy to treat brain tumors, however, is complicated by their deep intracranial location, since magnetic field density cannot be focused at a distance from an externally applied magnet. This study intended to examine whether, with magnetic targeting, pathological alteration in brain tumor flow dynamics could be of value in discriminating the diseased site from healthy brain. To address this question, the capture of magnetic nanoparticles was first assessed in vitro using a simple flow system under theoretically estimated glioma and normal brain flow conditions. Secondly, accumulation of nanoparticles via magnetic targeting was evaluated in vivo using 9L-glioma bearing rats. In vitro results that predicted a 7.6-fold increase in nanoparticle capture at glioma- versus contralateral brain-relevant flow rates were relatively consistent with the 9.6-fold glioma selectivity of nanoparticle accumulation over the contralateral brain observed in vivo. Based on these findings, the in vitro ratio of nanoparticle capture can be viewed as a plausible indicator of in vivo glioma selectivity. Overall, it can be concluded that the decreased blood flow rate in glioma, reflecting tumor vascular abnormalities, is an important contributor to glioma-selective nanoparticle accumulation with magnetic targeting.

  12. Post-launch validation of Multispectral Thermal Imager (MTI) data and algorithms

    NASA Astrophysics Data System (ADS)

    Garrett, Alfred J.; Kurzeja, Robert J.; O'Steen, B. L.; Parker, Matthew J.; Pendergast, Malcolm M.; Villa-Aleman, Eliel

    1999-10-01

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL) and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern U.S., Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V&V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in Nevada. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V&V.

  13. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  14. On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    The paper considers the problem of selecting individuals from the current population of a genetic algorithm for crossover, with the goal of finding a solution of high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task, but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending order by their fitness values. That is, uniform probabilities have to be assigned to a group of the best individuals of the population, and zero probability to individuals with lower fitness, assuming that the probability distribution used to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized to performance measures typically applied in practice, such as maximizing the expected fitness value of the best individual seen in any generation.
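
    A minimal Python sketch of the rectangular selection distribution described above: uniform probability over the k fittest individuals, zero elsewhere. The cutoff k and the toy fitness values are illustrative assumptions, not quantities derived in the paper.

    ```python
    import random

    def rectangular_selection(population, fitnesses, k):
        """Select one individual uniformly from the k fittest.

        Implements the 'rectangular distribution' described above:
        uniform probability over the top-k individuals (sorted by
        descending fitness), zero probability for the rest.
        """
        ranked = sorted(zip(population, fitnesses), key=lambda p: p[1], reverse=True)
        return random.choice(ranked[:k])[0]

    # Toy usage: pick a crossover parent from the best 3 of 8 individuals.
    pop = list(range(8))   # individuals (here just labels)
    fit = [0.9, 0.1, 0.7, 0.4, 0.95, 0.2, 0.6, 0.3]
    parent = rectangular_selection(pop, fit, k=3)
    ```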

  15. RoboTAP: Target priorities for robotic microlensing observations

    NASA Astrophysics Data System (ADS)

    Hundertmark, M.; Street, R. A.; Tsapras, Y.; Bachelet, E.; Dominik, M.; Horne, K.; Bozza, V.; Bramich, D. M.; Cassan, A.; D'Ago, G.; Figuera Jaimes, R.; Kains, N.; Ranc, C.; Schmidt, R. W.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Mao, S.; Ment, K.; Menzies, J.; Li, Z.; Cross, S.; Maoz, D.; Shvartzvald, Y.

    2018-01-01

    Context. The ability to automatically select scientifically important transient events from an alert stream of many such events, and to conduct follow-up observations in response, will become increasingly important in astronomy. With wide-angle time domain surveys pushing to fainter limiting magnitudes, the number of transient alerts far exceeds our follow-up telescope resources, and effective target prioritization becomes essential. The RoboNet-II microlensing program is a pathfinder project, which has developed an automated target selection process (RoboTAP) for gravitational microlensing events, which are observed in real time using the Las Cumbres Observatory telescope network. Aims: Follow-up telescopes typically have a much smaller field of view compared to surveys, therefore the most promising microlensing events must be automatically selected at any given time from an annual sample exceeding 2000 events. The main challenge is to select between events with a high planet detection sensitivity, with the aim of detecting many planets and characterizing planetary anomalies. Methods: Our target selection algorithm is a hybrid system based on estimates of the planet detection zones around a microlens. It follows automatic anomaly alerts and respects the expected survey coverage of specific events. Results: We introduce the RoboTAP algorithm, whose purpose is to select and prioritize microlensing events with high sensitivity to planetary companions. In this work, we determine the planet sensitivity of the RoboNet follow-up program and provide a working example of how a broker can be designed for a real-life transient science program conducting follow-up observations in response to alerts; we explore the issues that will confront similar programs being developed for the Large Synoptic Survey Telescope (LSST) and other time domain surveys.

  16. Shuffling cross-validation-bee algorithm as a new descriptor selection method for retention studies of pesticides in biopartitioning micellar chromatography.

    PubMed

    Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire

    2017-05-04

    The bee algorithm (BA) is an optimization algorithm inspired by the natural foraging behaviour of honey bees, and it can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the best descriptors to describe the retention factor (log k) of 79 heterogeneous pesticides in biopartitioning micellar chromatography (BMC). Six descriptors were obtained using BA, and the selected descriptors were then used for model development with multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm and simulated annealing methods, with MLR applied for model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can serve as a powerful descriptor selection method. A support vector machine (SVM) was also applied for model development using the six descriptors selected by BA. The statistical results obtained using SVM were better than those obtained using MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) were 0.1863 and 0.9426, respectively, with shuffling CV-BA-MLR, versus 0.0704 and 0.9922 with shuffling CV-BA-SVM.
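
    The model-evaluation step reported above (RMSE and R for an MLR fit on selected descriptors) is easy to reproduce in outline. A sketch with placeholder data; the descriptor matrix, response values, and dimensions are synthetic, not the paper's pesticide data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(79, 6))   # 79 pesticides x 6 selected descriptors (placeholder data)
    y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=79)   # synthetic log k values

    # Ordinary least-squares MLR fit (with intercept).
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef

    rmse = np.sqrt(np.mean((y - y_hat) ** 2))   # root mean square error
    r = np.corrcoef(y, y_hat)[0, 1]             # correlation coefficient R
    print(f"RMSE = {rmse:.4f}, R = {r:.4f}")
    ```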

  17. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed on real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A large number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm meets the accuracy requirement; that algorithm, however, introduces significant errors for small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a given contact angle error are obtained through extensive computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it retains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
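
    A hedged sketch of the kind of combination rule the abstract describes, dispatching to the circle-fitting, ellipse-fitting, or ADSA-P algorithm by drop volume and approximate contact angle. The thresholds here are placeholders; the paper derives its critical volumes numerically for a target contact-angle error.

    ```python
    def select_contact_angle_algorithm(volume_uL, approx_angle_deg,
                                       v_crit=5.0, angle_crit=90.0):
        """Pick a drop-shape algorithm following the combination strategy
        sketched in the abstract: circle/ellipse fitting for small/medium
        drops and angles (noise-robust), ADSA-P for large drops and angles.

        v_crit and angle_crit are placeholder thresholds; the paper
        computes critical volumes for a given contact-angle error.
        """
        if volume_uL < v_crit and approx_angle_deg < angle_crit:
            return "circle-fit" if approx_angle_deg < 60.0 else "ellipse-fit"
        return "ADSA-P"

    print(select_contact_angle_algorithm(2.0, 45.0))    # -> circle-fit
    print(select_contact_angle_algorithm(12.0, 120.0))  # -> ADSA-P
    ```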

  18. Genes@Work: an efficient algorithm for pattern discovery and multivariate feature selection in gene expression data.

    PubMed

    Lepre, Jorge; Rice, J Jeremy; Tu, Yuhai; Stolovitzky, Gustavo

    2004-05-01

    Despite the growing literature devoted to finding differentially expressed genes in assays probing different tissue types, little attention has been paid to the combinatorial nature of feature selection inherent to large, high-dimensional gene expression datasets. New flexible data analysis approaches capable of searching relevant subgroups of genes and experiments are needed to understand multivariate associations of gene expression patterns with observed phenotypes. We present in detail a deterministic algorithm to discover patterns of multivariate gene associations in gene expression data. The patterns discovered are differential with respect to a control dataset. The algorithm is exhaustive and efficient, reporting all existent patterns that fit a given input parameter set while avoiding enumeration of the entire pattern space. The value of the pattern discovery approach is demonstrated by finding a set of genes that differentiate between two types of lymphoma. Moreover, these genes are found to behave consistently in an independent dataset produced in a different laboratory using different arrays, thus validating the genes selected using our algorithm. We show that the genes deemed significant in terms of their multivariate statistics will be missed using other methods. Our set of pattern discovery algorithms including a user interface is distributed as a package called Genes@Work. This package is freely available to non-commercial users and can be downloaded from our website (http://www.research.ibm.com/FunGen).

  19. Genomic selection and complex trait prediction using a fast EM algorithm applied to genome-wide markers

    PubMed Central

    2010-01-01

    Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
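
    A toy E-step in the spirit of the mixture prior described above: the posterior probability that each SNP effect is drawn from the "in LD with a QTL" component of a two-component normal mixture. This is not the authors' full emBayesB update; the mixing proportion and variances are assumed here, and emBayesB also re-estimates its hyperparameters in the M-step.

    ```python
    import numpy as np
    from scipy.stats import norm

    def e_step(effects, pi_nonzero=0.05, sd_zero=0.01, sd_qtl=0.5):
        """Toy E-step: posterior probability that each SNP effect comes
        from the 'in LD with a QTL' component of a two-normal mixture.
        All hyperparameters are illustrative placeholders."""
        like_qtl = pi_nonzero * norm.pdf(effects, scale=sd_qtl)
        like_zero = (1 - pi_nonzero) * norm.pdf(effects, scale=sd_zero)
        return like_qtl / (like_qtl + like_zero)

    snp_effects = np.array([0.001, -0.02, 0.4, 0.005, -0.31])
    print(e_step(snp_effects))  # large-effect SNPs get probability near 1
    ```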

  20. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628

  1. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user’s gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods. PMID:28420114

  2. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
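
    A small numeric check of the Grover scaling mentioned above: the optimal number of iterations for M marked items in an unsorted database of N, and the resulting success probability.

    ```python
    import math

    def grover_iterations(N, M):
        """Optimal number of Grover iterations for M marked items out of N,
        k = floor(pi / (4*theta)) with theta = asin(sqrt(M/N)), and the
        resulting success probability sin^2((2k+1)*theta). For M << N,
        k ~ (pi/4) * sqrt(N/M), the quadratic speedup."""
        theta = math.asin(math.sqrt(M / N))
        k = math.floor(math.pi / (4 * theta))
        p_success = math.sin((2 * k + 1) * theta) ** 2
        return k, p_success

    k, p = grover_iterations(N=1_000_000, M=1)
    print(k, p)  # ~785 iterations vs ~500,000 classical lookups on average
    ```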

  3. Near Surface Swimming of Salmonella Typhimurium Explains Target-Site Selection and Cooperative Invasion

    PubMed Central

    Kreibich, Saskia; Vonaesch, Pascale; Andritschke, Daniel; Rout, Samuel; Weidner, Kerstin; Sormaz, Milos; Songhet, Pascal; Horvath, Peter; Chabria, Mamta; Vogel, Viola; Spori, Doris M.; Jenny, Patrick; Hardt, Wolf-Dietrich

    2012-01-01

    Targeting of permissive entry sites is crucial for bacterial infection. The targeting mechanisms are incompletely understood. We have analyzed target-site selection by S. Typhimurium. This enteropathogenic bacterium employs adhesins (e.g. fim) and the type III secretion system 1 (TTSS-1) for host cell binding, the triggering of ruffles and invasion. Typically, S. Typhimurium invasion is focused on a subset of cells and multiple bacteria invade via the same ruffle. It has remained unclear how this is achieved. We have studied target-site selection in tissue culture by time lapse microscopy, movement pattern analysis and modeling. Flagellar motility (but not chemotaxis) was required for reaching the host cell surface in vitro. Subsequently, physical forces trapped the pathogen for ∼1.5–3 s in “near surface swimming”. This increased the local pathogen density and facilitated “scanning” of the host surface topology. We observed transient TTSS-1 and fim-independent “stopping” and irreversible TTSS-1-mediated docking, in particular at sites of prominent topology, i.e. the base of rounded-up cells and membrane ruffles. Our data indicate that target site selection and the cooperative infection of membrane ruffles are attributable to near surface swimming. This mechanism might be of general importance for understanding infection by flagellated bacteria. PMID:22911370

  4. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
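
    The two evaluation metrics used above are straightforward to state in code. A sketch on boolean segmentation masks, assuming numpy arrays and pixel units, followed by a toy usage example with two overlapping square ROIs.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice = 2|A n B| / (|A| + |B|) for boolean segmentation masks."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    def target_registration_error(mask_manual, mask_auto):
        """TRE as used above: distance between the centroids of the
        manual and automatically segmented ROIs (in pixels here)."""
        c_manual = np.argwhere(mask_manual).mean(axis=0)
        c_auto = np.argwhere(mask_auto).mean(axis=0)
        return np.linalg.norm(c_manual - c_auto)

    # Toy 2D example: two overlapping square ROIs.
    a = np.zeros((64, 64), bool); a[10:30, 10:30] = True
    b = np.zeros((64, 64), bool); b[12:32, 14:34] = True
    print(dice_coefficient(a, b), target_registration_error(a, b))
    ```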

  5. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    PubMed Central

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  6. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    PubMed

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of

  7. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
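
    A hedged sketch of the fitness-function idea only: the GA fitness of a binary variable-selection mask is the cross-validated accuracy of an SVM trained on the selected columns. The scikit-learn calls are standard, but the data here are synthetic placeholders and the GA loop itself is omitted.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y):
        """GA fitness for a binary variable-selection mask: mean 5-fold
        cross-validated accuracy of an SVM trained on the selected columns.
        A GA would evolve 'mask' bit-strings to maximize this value."""
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 20))           # placeholder measured variables
    y = (X[:, 3] + X[:, 7] > 0).astype(int)  # synthetic juice classes
    mask = rng.random(20) < 0.3              # one candidate chromosome
    print(fitness(mask, X, y))
    ```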

  8. Selective Targeting of Antiviral and Immunomodulating Agents in the Treatment of Arenavirus Infections

    DTIC Science & Technology

    1987-10-01

    observed with free MTP-PE. In addition to our observations on peritoneal and alveolar macrophages, we also examined the effect of MTP-PE treatment on liver… [The remainder of this DTIC record is cover-page OCR residue; recoverable fields: report title "Selective Targeting of Antiviral and Immunomodulating Agents in the Treatment of Arenavirus Infections"; personal author J. David Gangemi.]

  9. Infrared small target tracking based on SOPC

    NASA Astrophysics Data System (ADS)

    Hu, Taotao; Fan, Xiang; Zhang, Yu-Jin; Cheng, Zheng-dong; Zhu, Bin

    2011-01-01

    The paper presents a low-cost FPGA-based solution for a real-time infrared small target tracking system. A specialized architecture is presented based on a soft RISC processor capable of running a kernel-based mean shift tracking algorithm. The mean shift tracking algorithm is realized in a NIOS II soft-core with SOPC (System on a Programmable Chip) technology. Though the mean shift algorithm is widely used for target tracking, the original algorithm cannot be directly applied to infrared small target tracking, because an infrared small target carries only intensity information; an improved mean shift algorithm is therefore presented in this paper. How the target is described determines whether it can be tracked by the mean shift algorithm. Because color targets are tracked well by mean shift, the target is described, in imitation of a color image representation, by a spatial component and a temporal component, which together form a pseudo-color image. To improve processing speed, parallel and pipeline techniques are employed: two RAMs store images alternately in a ping-pong arrangement, and a FLASH memory holds bulk temporary data. The experimental results show that infrared small targets are tracked stably against complicated backgrounds.
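
    A minimal sketch of the intensity-weighted mean-shift centroid update at the core of such a tracker. Window size, iteration limit, and convergence tolerance are assumptions; the paper's pseudo-color description built from spatial and temporal components is not reproduced here, raw intensity stands in for it.

    ```python
    import numpy as np

    def mean_shift_track(frame, center, half_win=8, iters=10, tol=0.5):
        """Shift a tracking window toward the intensity-weighted centroid,
        the core mean-shift update (here weighting by raw IR intensity)."""
        cy, cx = center
        for _ in range(iters):
            y0 = max(int(cy) - half_win, 0)
            y1 = min(int(cy) + half_win + 1, frame.shape[0])
            x0 = max(int(cx) - half_win, 0)
            x1 = min(int(cx) + half_win + 1, frame.shape[1])
            patch = frame[y0:y1, x0:x1].astype(float)
            ys, xs = np.mgrid[y0:y1, x0:x1]
            w = patch.sum()
            if w == 0:
                break
            ny, nx = (ys * patch).sum() / w, (xs * patch).sum() / w
            if np.hypot(ny - cy, nx - cx) < tol:
                break
            cy, cx = ny, nx
        return cy, cx

    # Toy frame with a bright small target near (40, 52).
    f = np.zeros((128, 128)); f[38:43, 50:55] = 255.0
    print(mean_shift_track(f, center=(36.0, 48.0)))
    ```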

  10. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates can then be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
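
    A naive toy version of the redundancy-minimizing idea: count every k-mer across the whole sequence set and keep, for each transcript, the targets that occur nowhere else. The value of k and the toy sequences are illustrative; CRM itself works with 9-15 nt kernels over full transcriptomes.

    ```python
    from collections import Counter

    def unique_targets(transcripts, k=9):
        """Map k-mer 'targets' that are unique within the whole set,
        the core filtering idea behind CRM (naive version)."""
        counts = Counter(seq[i:i + k]
                         for seq in transcripts.values()
                         for i in range(len(seq) - k + 1))
        return {name: [seq[i:i + k] for i in range(len(seq) - k + 1)
                       if counts[seq[i:i + k]] == 1]
                for name, seq in transcripts.items()}

    toy = {"tx1": "ACGTACGTAGGCTAGCTA", "tx2": "ACGTACGTAGCCTTAGCA"}
    print(unique_targets(toy, k=9))
    ```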

  11. Combining the genetic algorithm and successive projection algorithm for the selection of feature wavelengths to evaluate exudative characteristics in frozen-thawed fish muscle.

    PubMed

    Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin

    2016-04-15

    The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and the successive projections algorithm, and showed satisfactory performance in drip loss prediction. The MLR model, with a determination coefficient of prediction (R2P) of 0.9258 and a low root mean square error of prediction (RMSEP) of 1.12%, was applied to each pixel of the image to generate distribution maps of exudation changes. The results confirmed that it is feasible to identify feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
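
    A sketch of the prediction-set statistics reported above (R2P and RMSEP) for an MLR model on a handful of selected wavelengths. The calibration/prediction split and all values are synthetic placeholders, not the paper's fish data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(90, 5))   # reflectance at 5 selected wavelengths (placeholder)
    y = X @ np.array([1.2, -0.4, 0.8, 0.3, -0.6]) + 0.2 * rng.normal(size=90)

    X_tr, X_te, y_tr, y_te = X[:60], X[60:], y[:60], y[60:]  # calibration / prediction sets
    A_tr = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

    y_pred = np.column_stack([np.ones(len(X_te)), X_te]) @ coef
    rmsep = np.sqrt(np.mean((y_te - y_pred) ** 2))   # RMSEP on the prediction set
    r2p = 1 - np.sum((y_te - y_pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
    print(f"R2P = {r2p:.4f}, RMSEP = {rmsep:.4f}")
    ```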

  12. Rapid and selective updating of the target template in visual search.

    PubMed

    Sha, Li Z; Remington, Roger W; Jiang, Yuhong V

    2017-01-01

    Frequent target stimuli are detected more rapidly than infrequent ones. Here, we examined whether the frequency effect reflected durable attentional biases toward frequent target features, and whether the effect was confined to featural properties that defined the target. Participants searched for two specific target colors among distractors of heterogeneous colors and reported the line orientation of the target. The target was more often in one specific feature (e.g. a specific color or a specific orientation) than another in a training phase. This frequency difference was removed or reversed in a testing phase. Experiments 1 and 2 showed that when frequency differences were introduced to the target's defining feature, participants more rapidly found the high-frequency target than the low-frequency target. However, changes in attention were not durable: the search advantage vanished immediately when the frequency differences were removed. Experiments 3-5 showed that only featural properties that defined the target facilitated search of the more frequent feature. Features that did not define the target, such as the target feature that participants reported, sped up response but did not facilitate search. These data showed that when searching for multiple targets in a feature search task, people selectively and rapidly adapt to the frequency in the target's defining feature.

  13. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-12-19

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation.
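
    A hedged toy of the dynamic polling-point idea: greedily admit the nodes with the highest residual energy as polling points while a simple nearest-neighbour tour over the chosen points stays within a delay (tour-length) budget. The greedy rule, the tour heuristic, and all numbers are simplifying assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def tour_length(pts):
        """Length of a nearest-neighbour tour starting from the first point."""
        if len(pts) < 2:
            return 0.0
        rest, cur, total = list(range(1, len(pts))), 0, 0.0
        while rest:
            nxt = min(rest, key=lambda j: np.linalg.norm(pts[cur] - pts[j]))
            total += np.linalg.norm(pts[cur] - pts[nxt])
            cur = nxt
            rest.remove(nxt)
        return total

    def select_polling_points(pos, energy, max_tour_len):
        """Greedy toy polling-point selection: repeatedly add the node with
        the highest residual energy while the actuator's tour over the
        chosen points respects the collection-delay (tour length) budget."""
        chosen = []
        for i in np.argsort(-energy):  # richest nodes first
            trial = chosen + [int(i)]
            if tour_length(pos[trial]) <= max_tour_len:
                chosen = trial
        return chosen

    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 100, size=(12, 2))   # sensor coordinates (m)
    energy = rng.uniform(0, 1, size=12)       # residual energies
    print(select_polling_points(pos, energy, max_tour_len=150.0))
    ```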

  14. Selection, Prioritization, and Characteristics of Kepler Target Stars

    DTIC Science & Technology

    2010-04-20

    contributions from zodiacal emission as well as background stars): r = F∗ / (F∗ + Fbg) (Eq. 5). The photometric aperture is defined as the set of pixels that… [The remainder of this DTIC record is journal front-matter residue: The Astrophysical Journal Letters, 713:L109-L114, 2010 April 20, doi:10.1088/2041-8205/713/2/L109; "Selection, Prioritization, and Characteristics of Kepler Target Stars", Natalie M. Batalha, William J. Borucki, et al.]

  15. Predictive distractor context facilitates attentional selection of high, but not intermediate and low, salience targets.

    PubMed

    Töllner, Thomas; Conci, Markus; Müller, Hermann J

    2015-03-01

    It is well established that we can focally attend to a specific region in visual space without shifting our eyes, so as to extract action-relevant sensory information from covertly attended locations. The underlying mechanisms that determine how fast we engage our attentional spotlight in visual-search scenarios, however, remain controversial. One dominant view advocated by perceptual decision-making models holds that the times taken for focal-attentional selection are mediated by an internal template that biases perceptual coding and selection decisions exclusively through target-defining feature coding. This notion directly predicts that search times remain unaffected whether or not participants can anticipate the upcoming distractor context. Here we tested this hypothesis by employing an illusory-figure localization task that required participants to search for an invariant target amongst a variable distractor context, which gradually changed, either randomly or predictably, as a function of distractor-target similarity. We observed a graded decrease in internal focal-attentional selection times, correlated with external behavioral latencies, for distractor contexts of higher relative to lower similarity to the target. Critically, for low but not intermediate and high distractor-target similarity, these context-driven effects were cortically and behaviorally amplified when participants could reliably predict the type of distractors. This interactive pattern demonstrates that search guidance signals can integrate information about distractor, in addition to target, identities to optimize distractor-target competition for focal-attentional selection. © 2014 Wiley Periodicals, Inc.

  16. Identification of new antibacterial targets in RNA polymerase of Mycobacterium tuberculosis by detecting positive selection sites.

    PubMed

    Wang, QingBiao; Xu, Yiqin; Gu, Zhuoya; Liu, Nian; Jin, Ke; Li, Yao; Crabbe, M James C; Zhong, Yang

    2018-04-01

    Bacterial RNA polymerase (RNAP) is an effective target for antibacterial treatment. In order to search for new potential targets in the RNAP of Mycobacterium, we detected adaptive selection in RNAP-related genes of 13 Mycobacterium strains by phylogenetic analysis. We first collected sequences of 17 genes, including rpoA, rpoB, rpoC, rpoZ, and sigma factors A-M. Maximum likelihood trees were then constructed, followed by positive selection detection. We found that sigG shows positive selection along the clade (M. tuberculosis, M. bovis), suggesting its important evolutionary role and its potential to be a new antibacterial target. Moreover, the regions near 933Cys and 935His on the rpoB subunit of M. tuberculosis showed significant positive selection and could also be a new attractive target for anti-tuberculosis drugs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Phage display selection of peptides that target calcium-binding proteins.

    PubMed

    Vetter, Stefan W

    2013-01-01

    Phage display allows rapid identification of peptide sequences with binding affinity towards target proteins, for example, calcium-binding proteins (CBPs). Phage technology allows screening of 10^9 or more independent peptide sequences and can identify CBP-binding peptides within 2 weeks. Adjusting the screening conditions allows selection of CBP-binding peptides that are either calcium-dependent or calcium-independent. The obtained peptide sequences can be used to identify CBP target proteins based on sequence homology, or to quickly obtain peptide-based CBP inhibitors to modulate CBP-target interactions. The protocol described here uses a commercially available phage display library, in which random 12-mer peptides are displayed on filamentous M13 phages. The library was screened against the calcium-binding protein S100B.

  18. Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters

    ERIC Educational Resources Information Center

    Smith, Nicholas A.; Trainor, Laurel J.

    2011-01-01

    This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…

  19. Occurrence Prospect of HDR and Target Site Selection Study in Southeastern of China

    NASA Astrophysics Data System (ADS)

    Lin, W.; Gan, H.

    2017-12-01

    Hot dry rock (HDR) geothermal resources are among the most important clean energy sources for the future. Site selection for an HDR resource is fundamental to HDR exploration. This paper compiles HDR development projects at home and abroad and summarizes geological indices for locating HDR resources. After comparing the geological background of HDR in the southeast coastal area of China, Yangjiang Xinzhou in Guangdong province, the Leizhou Peninsula area, Lingshui in Hainan province, and Huangshadong in Guangzhou were selected as key potential target areas along the southeast coast of China. A deep geothermal field model of the study area was established based on comprehensive analysis of each target area's deep geothermal geological background and deep thermal anomalies. The paper also compares the candidate HDR target locations and proposes a priority exploration target area and an exploration scheme.

  20. Competition between color and luminance for target selection in smooth pursuit and saccadic eye movements.

    PubMed

    Spering, Miriam; Montagnini, Anna; Gegenfurtner, Karl R

    2008-11-24

    Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.

  1. An algorithm for monitoring the traffic on a less-travelled road using multi-modal sensor suite

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Chatters, Gary; Liss, Brian; Vu, Hao; Sabatier, James M.

    2014-06-01

    We conducted an experiment to correlate the information gathered by a suite of hard sensors with information on social networks such as Twitter, Facebook, etc. The experiment consisted of monitoring traffic on a well-traveled road and on a road inside a facility. The selected sensor suite mainly consists of low-power sensors with long operating lifetimes. The output of each sensor is analyzed to classify targets as ground vehicles, humans, or airborne targets. The algorithm also counts the number of targets of each type so the sensor can store the information for anomaly detection. In this paper, we describe the classifier algorithms used for acoustic, seismic, and passive infrared (PIR) sensor data.

  2. Lorcaserin and pimavanserin: emerging selectivity of serotonin receptor subtype–targeted drugs

    PubMed Central

    Meltzer, Herbert Y.; Roth, Bryan L.

    2013-01-01

    Serotonin (5-hydroxytryptamine, or 5-HT) receptors mediate a plethora of physiological phenomena in the brain and the periphery. Additionally, serotonergic dysfunction has been implicated in nearly every neuropsychiatric disorder. The effects of serotonin are mediated by fourteen GPCRs. Both the therapeutic actions and side effects of commonly prescribed drugs are frequently due to nonspecific actions on various 5-HT receptor subtypes. For more than 20 years, the search for clinically efficacious drugs that selectively target 5-HT receptor subtypes has been only occasionally successful. This review provides an overview of 5-HT receptor pharmacology and discusses two recent 5-HT receptor subtype–selective drugs, lorcaserin and pimavanserin, which target the 5HT2C and 5HT2A receptors and provide new treatments for obesity and Parkinson’s disease psychosis, respectively. PMID:24292660

  3. Lorcaserin and pimavanserin: emerging selectivity of serotonin receptor subtype-targeted drugs.

    PubMed

    Meltzer, Herbert Y; Roth, Bryan L

    2013-12-01

    Serotonin (5-hydroxytryptamine, or 5-HT) receptors mediate a plethora of physiological phenomena in the brain and the periphery. Additionally, serotonergic dysfunction has been implicated in nearly every neuropsychiatric disorder. The effects of serotonin are mediated by fourteen GPCRs. Both the therapeutic actions and side effects of commonly prescribed drugs are frequently due to nonspecific actions on various 5-HT receptor subtypes. For more than 20 years, the search for clinically efficacious drugs that selectively target 5-HT receptor subtypes has been only occasionally successful. This review provides an overview of 5-HT receptor pharmacology and discusses two recent 5-HT receptor subtype-selective drugs, lorcaserin and pimavanserin, which target the 5HT2C and 5HT2A receptors and provide new treatments for obesity and Parkinson's disease psychosis, respectively.

  4. Feature-based RNN target recognition

    NASA Astrophysics Data System (ADS)

    Bakircioglu, Hakan; Gelenbe, Erol

    1998-09-01

    Detection and recognition of target signatures in sensory data obtained by synthetic aperture radar (SAR), forward-looking infrared, or laser radar have received considerable attention in the literature. In this paper, we propose a feature-based target classification methodology to detect and classify targets in cluttered SAR images. It makes use of selective signature data from the sensory data, together with a set of trained networks based on the Random Neural Network (RNN) model (Gelenbe 89, 90, 91, 93) that are trained to act as matched filters. We propose and investigate radial features of target shapes that are invariant to rotation, translation, and scale, to characterize target and clutter signatures. These features are then used to train a set of learning RNNs which can be used to detect targets within clutter with high accuracy, and to classify the targets or man-made objects from natural clutter. Experimental data from SAR imagery is used to illustrate and validate the proposed method, and to calculate Receiver Operating Characteristics which illustrate the performance of the proposed algorithm.
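
    One plausible realization of rotation-, translation-, and scale-invariant radial features, sketched on a binary target mask: a histogram of pixel distances from the shape centroid, normalized by the maximum radius. The binning is an assumption; the paper's exact feature definition may differ.

    ```python
    import numpy as np

    def radial_features(mask, n_bins=16):
        """Radial signature of a binary shape mask: histogram of pixel
        distances from the centroid (translation invariance), normalized
        by the maximum radius (scale invariance); rotation invariance
        follows because only radii, not angles, are used."""
        pts = np.argwhere(mask)
        centroid = pts.mean(axis=0)
        r = np.linalg.norm(pts - centroid, axis=1)
        if r.size and r.max() > 0:
            r = r / r.max()
        hist, _ = np.histogram(r, bins=n_bins, range=(0, 1))
        return hist / hist.sum()

    # Toy disc-shaped "target".
    yy, xx = np.mgrid[:64, :64]
    m = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
    print(radial_features(m))
    ```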

  5. Target tracking and 3D trajectory acquisition of cabbage butterfly (P. rapae) based on the KCF-BS algorithm.

    PubMed

    Guo, Yang-Yang; He, Dong-Jian; Liu, Cong

    2018-06-25

    Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record flight trajectories quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using Kernelized Correlation Filters (KCF) and Background Subtraction (BS) (KCF-BS) to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain the coordinates of the target centroid in the two videos. Finally, the 3D trajectory was calculated from the matching relationship between corresponding frames of the two viewing angles. To verify the validity of the KCF-BS algorithm, the Compressive Tracking (CT) and Spatio-Temporal Context Learning (STC) algorithms were run for comparison. The results revealed that the KCF-BS tracking algorithm performed more favourably than CT and STC in terms of accuracy and robustness.
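
    A minimal background-subtraction localization step of the kind the KCF-BS pipeline combines with KCF. The static background model and the threshold are assumptions; the correlation-filter tracker and the stereo matching into 3D are omitted.

    ```python
    import numpy as np

    def bs_centroid(frame, background, thresh=25):
        """Locate a moving target by background subtraction: threshold the
        absolute difference against a static background model and return
        the centroid of foreground pixels (None if none found). In KCF-BS
        this localization complements the KCF correlation tracker before
        stereo matching into 3D."""
        fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
        pts = np.argwhere(fg)
        return None if len(pts) == 0 else pts.mean(axis=0)

    bg = np.full((120, 160), 30, np.uint8)
    fr = bg.copy(); fr[60:66, 80:86] = 200   # butterfly-sized bright blob
    print(bs_centroid(fr, bg))               # ~ (62.5, 82.5)
    ```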

  6. Improvements to the ShipIR/NTCS adaptive track gate algorithm and 3D flare particle model

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.; Gunter, Willem H.; February, Faith J.

    2017-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and gate the selected target to further improve tracker performance. Similarly, a key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes the recent improvements to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). Efforts to analyse and match the 3D flare particle model against actual IR measurements of the Chemring TALOS IR round resulted in further refinement of the 3D flare particle distribution. The changes in the flare model characteristics were significant enough to require an overhaul to the adaptive track gate (ATG) algorithm in the way it detects the presence of flare decoys and reacquires the target after flare separation. A series of test scenarios are used to demonstrate the impact of the new flare and ATG on IR tactics simulation.

  7. Automatic Tool Selection in V-bending Processes by Using an Intelligent Collision Detection Algorithm

    NASA Astrophysics Data System (ADS)

    Salem, A. A.

    2017-09-01

    V-bending is widely used to produce sheet metal components. The shape of the sheet metal component changes globally during progressive bending, so collisions may occur between the part and the tool. Collision-freedom is one of the feasibility conditions of V-bending process planning: a tool selection is verified by the absence of collisions. This paper proposes an intelligent collision detection algorithm that can distinguish 2D bent parts from other bent parts. Owing to this ability, separate 2D and 3D collision detection subroutines have been developed within the proposed algorithm. This division of the algorithm's subroutines reduces the computational operations required during collision detection.

  8. Autonomous Rover Traverse and Precise Arm Placement on Remotely Designated Targets

    NASA Technical Reports Server (NTRS)

    Nesnas, Issa A.; Pivtoraiko, Mihail N.; Kelly, Alonzo; Fleder, Michael

    2012-01-01

    This software controls a rover platform to traverse rocky terrain autonomously, plan paths, and avoid obstacles using its stereo hazard and navigation cameras. It does so while continuously tracking a target of interest selected from 10-20 m away. The rover drives and tracks the target until it reaches the vicinity of the target. The rover then positions itself to approach the target, deploys its robotic arm, and places the end effector instrument on the designated target to within 2-3-cm accuracy of the originally selected target. This software features continuous navigation in a fairly rocky field in an outdoor environment and the ability to enable the rover to avoid large rocks and traverse over smaller ones. Using point-and-click mouse commands, a scientist designates targets in the initial imagery acquired from the rover's mast cameras. The navigation software uses stereo imaging, traversability analysis, path planning, trajectory generation, and trajectory execution. It also includes visual target tracking of a designated target selected from 10 m away while continuously navigating the rocky terrain. Improvements in this design include steering while driving, which uses continuous curvature paths. There are also several improvements to the traversability analyzer, including improved data fusion of traversability maps that result from pose estimation uncertainties, dealing with boundary effects to enable tighter maneuvers, and handling a wider range of obstacles. This work advances what has been previously developed and integrated on the Mars Exploration Rovers by using algorithms that are capable of traversing more rock-dense terrains, enabling tight, thread-the-needle maneuvers. These algorithms were integrated on the newly refurbished Athena Mars research rover, and were fielded in the JPL Mars Yard. Forty-three runs were conducted with targets at distances ranging from 5 to 15 m, and a success rate of 93% was achieved for placement of the instrument within

  9. A multiplex primer design algorithm for target amplification of continuous genomic regions.

    PubMed

    Ozturk, Ahmet Rasit; Can, Tolga

    2017-06-19

    Targeted Next Generation Sequencing (NGS) assays are cost-efficient and reliable alternatives to Sanger sequencing. For sequencing very large sets of genes, the target enrichment approach is suitable. However, for smaller genomic regions, the target amplification method is more efficient than both target enrichment and Sanger sequencing. The major difficulty of the target amplification method is the preparation of amplicons, in terms of the required time, equipment, and labor. Multiplex PCR (MPCR) is a good solution to these problems. We propose a novel method to design MPCR primers for a continuous genomic region, following the best practices of clinically reliable PCR design processes. On an experimental setup with 48 different combinations of factors, we have shown that multiple parameters might affect finding the first feasible solution. Increasing the length of the initial primer candidate selection sequence gives better results, whereas waiting longer for the first feasible solution does not have a significant impact. We generated MPCR primer designs for the HBB whole gene, MEFV coding regions, and human exons between 2000 and 2100 bp long. Our benchmarking experiments show that the proposed MPCR approach is able to produce reliable NGS assay primers for a given sequence in a reasonable amount of time.

  10. An algorithm for targeting finite burn maneuvers

    NASA Technical Reports Server (NTRS)

    Barbieri, R. W.; Wyatt, G. H.

    1972-01-01

    An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.

  11. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    …we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications… distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst… [Fragmented DTIC record; only these abstract excerpts are recoverable, and the source repeats portions of the same passage.]

  12. In vitro Selection and Interaction Studies of a DNA Aptamer Targeting Protein A

    PubMed Central

    Stoltenburg, Regina; Schubert, Thomas; Strehlitz, Beate

    2015-01-01

    A new DNA aptamer targeting Protein A is presented. The aptamer was selected by use of the FluMag-SELEX procedure. The SELEX technology (Systematic Evolution of Ligands by EXponential enrichment) is widely applied as an in vitro selection and amplification method to generate target-specific aptamers and exists in various modified variants. FluMag-SELEX is one of them and is characterized by the use of magnetic beads for target immobilization and fluorescently labeled oligonucleotides for monitoring the aptamer selection progress. Structural investigations and sequence truncation experiments of the selected aptamer for Protein A led to the conclusion that a stem-loop structure at its 5’-end including the 5’-primer binding site is essential for aptamer-target binding. Extensive interaction analyses between aptamer and Protein A were performed by methods like surface plasmon resonance, MicroScale Thermophoresis and bead-based binding assays using fluorescence measurements. The binding of the aptamer to its target was thus investigated in assays with immobilization of one of the binding partners each, and with both binding partners in solution. Affinity constants were determined in the low micromolar to submicromolar range, increasing to the nanomolar range under the assumption of avidity. Protein A provides more than one binding site for the aptamer, which may overlap with the known binding sites for immunoglobulins. The aptamer binds specifically to both native and recombinant Protein A, but not to other immunoglobulin-binding proteins like Protein G and L. Cross specificity to other proteins was not found. The application of the aptamer is directed to Protein A detection or affinity purification. Moreover, whole cells of Staphylococcus aureus, presenting Protein A on the cell surface, could also be bound by the aptamer. PMID:26221730

  13. In vitro Selection and Interaction Studies of a DNA Aptamer Targeting Protein A.

    PubMed

    Stoltenburg, Regina; Schubert, Thomas; Strehlitz, Beate

    2015-01-01

    A new DNA aptamer targeting Protein A is presented. The aptamer was selected by use of the FluMag-SELEX procedure. The SELEX technology (Systematic Evolution of Ligands by EXponential enrichment) is widely applied as an in vitro selection and amplification method to generate target-specific aptamers and exists in various modified variants. FluMag-SELEX is one of them and is characterized by the use of magnetic beads for target immobilization and fluorescently labeled oligonucleotides for monitoring the aptamer selection progress. Structural investigations and sequence truncation experiments of the selected aptamer for Protein A led to the conclusion that a stem-loop structure at its 5'-end including the 5'-primer binding site is essential for aptamer-target binding. Extensive interaction analyses between aptamer and Protein A were performed by methods like surface plasmon resonance, MicroScale Thermophoresis and bead-based binding assays using fluorescence measurements. The binding of the aptamer to its target was thus investigated in assays with immobilization of one of the binding partners each, and with both binding partners in solution. Affinity constants were determined in the low micromolar to submicromolar range, increasing to the nanomolar range under the assumption of avidity. Protein A provides more than one binding site for the aptamer, which may overlap with the known binding sites for immunoglobulins. The aptamer binds specifically to both native and recombinant Protein A, but not to other immunoglobulin-binding proteins like Protein G and L. Cross specificity to other proteins was not found. The application of the aptamer is directed to Protein A detection or affinity purification. Moreover, whole cells of Staphylococcus aureus, presenting Protein A on the cell surface, could also be bound by the aptamer.

  14. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes comparing them through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions, and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand hand-annotated visible and infrared images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set

  15. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, the desired information must be computed quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  16. SU-F-J-115: Target Volume and Artifact Evaluation of a New Device-Less 4D CT Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T

    2016-06-15

    Purpose: 4DCT is often used in radiation therapy treatment planning to define the extent of motion of the visible tumor (IGTV). Recently available software allows 4DCT images to be created without the use of an external motion surrogate. This study aims to compare this device-less algorithm to a standard device-driven technique (RPM) with regard to artifacts and the creation of treatment volumes. Methods: 34 lung cancer patients who had previously received a cine 4DCT scan on a GE scanner with an RPM-determined respiratory signal were selected. Cine images were sorted into 10 phases based on both the RPM signal and the device-less algorithm. Contours were created on standard and device-less maximum intensity projection (MIP) images using a region-growing algorithm and manual adjustment to remove other structures. Variations in measurements due to intra-observer differences in contouring were assessed by repeating a subset of 6 patients 2 additional times. Artifacts in each phase image were assessed using normalized cross correlation at each bed position transition. A score between +1 (artifacts “better” in all phases for device-less) and −1 (RPM similarly better) was assigned for each patient based on these results. Results: Device-less IGTV contours were 2.1 ± 1.0% smaller than standard IGTV contours (not significant, p = 0.15). The Dice similarity coefficient (DSC) was 0.950 ± 0.006, indicating good similarity between the contours. Intra-observer variation resulted in standard deviations of 1.2 percentage points in percent volume difference and 0.005 in DSC measurements. Only two patients had improved artifacts with RPM, and the average artifact score (0.40) was significantly greater than zero. Conclusion: Device-less 4DCT can be used in place of the standard method for target definition due to no observed difference between standard and device-less IGTVs. Phase image artifacts were significantly reduced with the device-less method.
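
    The artifact scoring described above lends itself to a compact illustration. The sketch below (Python with NumPy; array names and the sign-based patient score are illustrative assumptions, not the study's exact implementation) computes normalized cross correlation between the slices on either side of each bed-position transition and compares the two sorting methods per patient.

        import numpy as np

        def ncc(a, b):
            # Normalized cross-correlation between two 2D slices.
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def transition_scores(volume, transitions):
            # volume: 3D array (slice, row, col); transitions: slice indices
            # where one couch/bed position ends and the next begins.
            return [ncc(volume[i - 1], volume[i]) for i in transitions]

        def compare_methods(vol_deviceless, vol_rpm, transitions):
            # Mean sign over transitions: +1 when device-less is smoother
            # everywhere, -1 when RPM is (an assumed scoring convention).
            s_dl = transition_scores(vol_deviceless, transitions)
            s_rpm = transition_scores(vol_rpm, transitions)
            return float(np.mean(np.sign(np.array(s_dl) - np.array(s_rpm))))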

  17. Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait.

    PubMed

    Martins, Maria; Costa, Lino; Frizera, Anselmo; Ceres, Ramón; Santos, Cristina

    2014-03-01

    Walker devices are often prescribed incorrectly to patients, leading to increased dissatisfaction and the occurrence of several problems, such as discomfort and pain. Thus, it is necessary to objectively evaluate the effects that assisted gait can have on the gait patterns of walker users, compared to non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters was performed for this purpose. However, gait analysis yields redundant information that is often difficult to interpret. This study addresses the problem of selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. For that purpose, an efficient approach is presented that combines evolutionary techniques, based on genetic algorithms, with support vector machine algorithms to discriminate differences between assisted and non-assisted gait with a walker with forearm supports. For comparison purposes, other classification algorithms are verified. Results with healthy subjects show that the main differences are characterized by balance and joint excursion in the sagittal plane. These results, confirmed by clinical evidence, allow concluding that this technique is an efficient feature selection approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    NASA Astrophysics Data System (ADS)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA), and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389, and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R²p) of 0.901, a root mean square error of prediction (RMSEP) of 0.108, and a residual predictive deviation (RPD) of 2.32. Based on the best model obtained and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determination of the protein content in peanut kernels.
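
    As an illustration of the reported metrics, the following sketch fits a PLSR model on spectra restricted to a handful of selected wavelengths and computes R²p, RMSEP, and RPD. The data here are synthetic placeholders and the component count is an assumption; only the metric definitions follow standard practice.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Hypothetical data: X holds spectra at the eight selected
        # wavelengths; y holds measured protein content (%).
        rng = np.random.default_rng(0)
        X_train, y_train = rng.normal(size=(80, 8)), rng.uniform(23.5, 28.4, 80)
        X_test, y_test = rng.normal(size=(20, 8)), rng.uniform(23.5, 28.4, 20)

        pls = PLSRegression(n_components=4)   # component count is an assumption
        pls.fit(X_train, y_train)
        y_pred = pls.predict(X_test).ravel()

        rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))  # prediction error
        rpd = np.std(y_test, ddof=1) / rmsep              # residual predictive deviation
        r2p = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
        print(f"R2p={r2p:.3f} RMSEP={rmsep:.3f} RPD={rpd:.2f}")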

  19. A Uniform Energy Consumption Algorithm for Wireless Sensor and Actuator Networks Based on Dynamic Polling Point Selection

    PubMed Central

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2014-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue for balancing the prolongation of the network lifetime against the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm based on dynamic polling point selection with delay constraints is proposed to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations, while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted at these polling points to balance the sensor node energy consumption and extend the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  20. Extreme selective sweeps independently targeted the X chromosomes of the great apes

    PubMed Central

    Nam, Kiwoong; Munch, Kasper; Hobolth, Asger; Dutheil, Julien Yann; Veeramah, Krishna R.; Woerner, August E.; Hammer, Michael F.; Mailund, Thomas; Schierup, Mikkel Heide

    2015-01-01

    The unique inheritance pattern of the X chromosome exposes it to natural selection in a way that is different from that of the autosomes, potentially resulting in accelerated evolution. We perform a comparative analysis of X chromosome polymorphism in 10 great ape species, including humans. In most species, we identify striking megabase-wide regions, where nucleotide diversity is less than 20% of the chromosomal average. Such regions are found exclusively on the X chromosome. The regions overlap partially among species, suggesting that the underlying targets are partly shared among species. The regions have higher proportions of singleton SNPs, higher levels of population differentiation, and a higher nonsynonymous-to-synonymous substitution ratio than the rest of the X chromosome. We show that the extent to which diversity is reduced is incompatible with direct selection or the action of background selection and soft selective sweeps alone, and therefore, we suggest that very strong selective sweeps have independently targeted these specific regions in several species. The only genomic feature that we can identify as strongly associated with loss of diversity is the location of testis-expressed ampliconic genes, which also have reduced diversity around them. We hypothesize that these genes may be responsible for selective sweeps in the form of meiotic drive caused by an intragenomic conflict in male meiosis. PMID:25941379

  1. Selected algorithms for measurement data processing in impulse-radar-based system for monitoring of human movements

    NASA Astrophysics Data System (ADS)

    Miękina, Andrzej; Wagner, Jakub; Mazurek, Paweł; Morawski, Roman Z.

    2016-11-01

    The importance of research on new technologies that could be employed in care services for elderly and disabled persons is highlighted. Advantages of impulse-radar sensors, when applied to non-intrusive monitoring of such persons in their home environment, are indicated. Selected algorithms for the measurement data preprocessing - viz. the algorithms for clutter suppression and echo parameter estimation, as well as for estimation of the two-dimensional position of a monitored person - are proposed. The capability of an impulse-radar-based system to provide some application-specific parameters, viz. the parameters characterising the patient's health condition, is also demonstrated.
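
    For the two-dimensional position estimation step, one simple textbook approach (not necessarily the authors') is to intersect the range circles of two radar sensors. The sketch below assumes two sensors at known positions and ranges already extracted from the echoes; all names are illustrative.

        import numpy as np

        def position_2d(p1, r1, p2, r2):
            # Intersect two circles centered at sensor positions p1, p2 with
            # measured ranges r1, r2; returns the two candidate positions.
            p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
            d = np.linalg.norm(p2 - p1)
            a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p1 to chord
            h = np.sqrt(max(r1**2 - a**2, 0.0))    # half chord length
            mid = p1 + a * (p2 - p1) / d
            perp = np.array([-(p2 - p1)[1], (p2 - p1)[0]]) / d
            return mid + h * perp, mid - h * perp

        # Example: sensors 3 m apart, echo delays already converted to ranges.
        print(position_2d((0.0, 0.0), 2.5, (3.0, 0.0), 2.0))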

  2. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images. PMID:26679328
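
    The fusion idea can be sketched in a few lines: each warped atlas casts voxel-wise votes weighted by its local intensity similarity to the target image. This is a strong simplification of MUSE's local ranking and boundary modulation terms, and the Gaussian similarity kernel is an assumption.

        import numpy as np

        def local_similarity(target, atlas_img, sigma=50.0):
            # Voxelwise Gaussian similarity between target and warped atlas
            # intensities (a stand-in for MUSE's local ranking term).
            return np.exp(-((target - atlas_img) ** 2) / (2 * sigma**2))

        def fuse_labels(target, atlas_imgs, atlas_labels, n_labels):
            # atlas_imgs/atlas_labels: lists of warped atlas intensity and
            # label volumes already in target space (one per registration run).
            votes = np.zeros((n_labels,) + target.shape)
            for img, lab in zip(atlas_imgs, atlas_labels):
                w = local_similarity(target, img)
                for k in range(n_labels):
                    votes[k] += w * (lab == k)
            return np.argmax(votes, axis=0)   # consensus label per voxel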

  3. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search. It aims to find a target block (which contains the target items). It runs a little faster than a full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. By the perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.

  4. Mitochondrial Targeted Coenzyme Q, Superoxide, and Fuel Selectivity in Endothelial Cells

    PubMed Central

    Fink, Brian D.; O'Malley, Yunxia; Dake, Brian L.; Ross, Nicolette C.; Prisinzano, Thomas E.; Sivitz, William I.

    2009-01-01

    Background Previously, we reported that the “antioxidant” compound “mitoQ” (mitochondrial-targeted ubiquinol/ubiquinone) actually increased superoxide production by bovine aortic endothelial (BAE) cell mitochondria incubated with complex I but not complex II substrates. Methods and Results To further define the site of action of the targeted coenzyme Q compound, we extended these studies to include different substrate and inhibitor conditions. In addition, we assessed the effects of mitoquinone on mitochondrial respiration, measured respiration and mitochondrial membrane potential in intact cells, and tested the intriguing hypothesis that mitoquinone might impart fuel selectivity in intact BAE cells. In mitochondria respiring on differing concentrations of complex I substrates, mitoquinone and rotenone had interactive effects on ROS consistent with redox cycling at multiple sites within complex I. Mitoquinone increased respiration in isolated mitochondria respiring on complex I but not complex II substrates. Mitoquinone also increased oxygen consumption by intact BAE cells. Moreover, when added to intact cells at 50 to 1000 nM, mitoquinone increased glucose oxidation and reduced fat oxidation, at doses that did not alter membrane potential or induce cell toxicity. Although high dose mitoquinone reduced mitochondrial membrane potential, the positively charged mitochondrial-targeted cation, decyltriphenylphosphonium (mitoquinone without the coenzyme Q moiety), decreased membrane potential more than mitoquinone, but did not alter fuel selectivity. Therefore, non-specific effects of the positive charge were not responsible and the quinone moiety is required for altered nutrient selectivity. Conclusions In summary, the interactive effects of mitoquinone and rotenone are consistent with redox cycling at more than one site within complex I. In addition, mitoquinone has substrate dependent effects on mitochondrial respiration, increases respiration by intact cells

  5. Mitochondrial targeted coenzyme Q, superoxide, and fuel selectivity in endothelial cells.

    PubMed

    Fink, Brian D; O'Malley, Yunxia; Dake, Brian L; Ross, Nicolette C; Prisinzano, Thomas E; Sivitz, William I

    2009-01-01

    Previously, we reported that the "antioxidant" compound "mitoQ" (mitochondrial-targeted ubiquinol/ubiquinone) actually increased superoxide production by bovine aortic endothelial (BAE) cell mitochondria incubated with complex I but not complex II substrates. To further define the site of action of the targeted coenzyme Q compound, we extended these studies to include different substrate and inhibitor conditions. In addition, we assessed the effects of mitoquinone on mitochondrial respiration, measured respiration and mitochondrial membrane potential in intact cells, and tested the intriguing hypothesis that mitoquinone might impart fuel selectivity in intact BAE cells. In mitochondria respiring on differing concentrations of complex I substrates, mitoquinone and rotenone had interactive effects on ROS consistent with redox cycling at multiple sites within complex I. Mitoquinone increased respiration in isolated mitochondria respiring on complex I but not complex II substrates. Mitoquinone also increased oxygen consumption by intact BAE cells. Moreover, when added to intact cells at 50 to 1000 nM, mitoquinone increased glucose oxidation and reduced fat oxidation, at doses that did not alter membrane potential or induce cell toxicity. Although high dose mitoquinone reduced mitochondrial membrane potential, the positively charged mitochondrial-targeted cation, decyltriphenylphosphonium (mitoquinone without the coenzyme Q moiety), decreased membrane potential more than mitoquinone, but did not alter fuel selectivity. Therefore, non-specific effects of the positive charge were not responsible and the quinone moiety is required for altered nutrient selectivity. In summary, the interactive effects of mitoquinone and rotenone are consistent with redox cycling at more than one site within complex I. In addition, mitoquinone has substrate dependent effects on mitochondrial respiration, increases respiration by intact cells, and alters fuel selectivity favoring glucose over

  6. Design and Evaluation of Perceptual-based Object Group Selection Techniques

    NASA Astrophysics Data System (ADS)

    Dehmeshki, Hoda

    Selecting groups of objects is a frequent task in graphical user interfaces. It is required prior to many standard operations such as deletion, movement, or modification. Conventional selection techniques are lasso, rectangle selection, and the selection and de-selection of items through the use of modifier keys. These techniques may become time-consuming and error-prone when target objects are densely distributed or when the distances between target objects are large. Perceptual-based selection techniques can considerably improve selection tasks when targets have a perceptual structure, for example when arranged along a line. Current methods to detect such groups use ad hoc grouping algorithms that are not based on results from perception science. Moreover, these techniques do not allow selecting groups with arbitrary arrangements or permit modifying a selection. This dissertation presents two domain-independent perceptual-based systems that address these issues. Based on established group detection models from perception research, the proposed systems detect perceptual groups formed by the Gestalt principles of good continuation and proximity. The new systems provide gesture-based or click-based interaction techniques for selecting groups with curvilinear or arbitrary structures as well as clusters. Moreover, the gesture-based system is adapted for the graph domain to facilitate path selection. This dissertation includes several user studies that show the proposed systems outperform conventional selection techniques when targets form salient perceptual groups and are still competitive when targets are semi-structured.

  7. Development of a two-stage gene selection method that incorporates a novel hybrid approach using the cuckoo optimization algorithm and harmony search for cancer classification.

    PubMed

    Elyasigomari, V; Lee, D A; Screen, H R C; Shaheed, M H

    2017-03-01

    For each cancer type, only a few genes are informative. Due to the so-called 'curse of dimensionality' problem, the gene selection task remains a challenge. To overcome this problem, we propose a two-stage gene selection method called MRMR-COA-HS. In the first stage, minimum redundancy and maximum relevance (MRMR) feature selection is used to select a subset of relevant genes. The selected genes are then fed into a wrapper setup that combines a new algorithm, COA-HS, with a support vector machine classifier. The method was applied to four microarray datasets, and the performance was assessed by leave-one-out cross-validation. Comparative performance assessment of the proposed method with other evolutionary algorithms suggested that the proposed algorithm significantly outperforms other methods in selecting fewer genes while maintaining the highest classification accuracy. The functions of the selected genes were further investigated, and it was confirmed that the selected genes are biologically relevant to each cancer type. Copyright © 2017. Published by Elsevier Inc.
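
    The wrapper stage can be illustrated as follows, with the COA-HS metaheuristic replaced by a plain random subset search for brevity (an acknowledged simplification); the fitness function, leave-one-out cross-validated accuracy of an SVM on the candidate gene subset, matches the evaluation described above.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        def loocv_accuracy(X, y, genes):
            # Fitness of a candidate gene subset: LOOCV accuracy of an SVM.
            clf = SVC(kernel="linear")
            return cross_val_score(clf, X[:, genes], y, cv=LeaveOneOut()).mean()

        def random_wrapper(X, y, n_select=10, n_iter=200, seed=0):
            # Stand-in for the COA-HS search: sample subsets, keep the best.
            rng = np.random.default_rng(seed)
            best, best_fit = None, -1.0
            for _ in range(n_iter):
                genes = rng.choice(X.shape[1], size=n_select, replace=False)
                fit = loocv_accuracy(X, y, genes)
                if fit > best_fit:
                    best, best_fit = genes, fit
            return best, best_fit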

  8. Treatment algorithm in 2014 for advanced non-small cell lung cancer: therapy selection by tumour histology and molecular biology.

    PubMed

    Manegold, Christian

    2014-09-01

    The availability of antineoplastic monoclonal antibodies, small molecules, and newer cytotoxics such as pemetrexed, the EGFR tyrosine kinase inhibitors erlotinib, gefitinib, and afatinib, the anti-angiogenic bevacizumab, and the ALK inhibitor crizotinib has recently changed the treatment algorithm of advanced non-small cell lung cancer. Decision making in 2014 is characterized by customizing therapy, by selecting a specific therapeutic regimen based on the histotype and the genotype of the tumour. This applies not only to first-line induction therapy and maintenance therapy but also to subsequent lines of therapy, since the anti-neoplastic drugs and regimens used upfront clinically influence the selection of agents/regimens considered for second-/third-line treatment. Consequently, therapy customization through tumour histology and molecular markers has significantly influenced the work of pathologists around the globe and the process of obtaining an extended, therapeutically relevant tumour diagnosis. Not only has histological sub-typing become standard, but molecular information is also considered of increasing importance for treatment selection. Routine molecular testing in certified laboratories must be established, and the diagnostic process should ideally be performed under the guidance of evidence-based recommendations. The process of investigating and implementing medical targeting in lung cancer therefore requires advanced diagnostic techniques and expertise and, because of its large scale, is costly and constrained by the limits of financial and clinical resources. Copyright © 2014. Published by Elsevier Urban & Partner Sp. z o.o.

  9. Software for Partly Automated Recognition of Targets

    NASA Technical Reports Server (NTRS)

    Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.

    2002-01-01

    The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.

  10. Software for Partly Automated Recognition of Targets

    NASA Technical Reports Server (NTRS)

    Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark

    2003-01-01

    The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.

  11. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim target track-before-detect applications is proposed. It uses nonlinear theory to construct the state and observation models and uses the Wiener chaos expansion method with a spectral separation scheme to solve the stochastic differential equation of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are performed in advance, before observations arrive; the remaining observation-dependent computations are carried out rapidly afterwards. Simulation results show that the algorithm possesses excellent detection performance and is more suitable for real-time processing.

  12. Construction and applications of exon-trapping gene-targeting vectors with a novel strategy for negative selection.

    PubMed

    Saito, Shinta; Ura, Kiyoe; Kodama, Miho; Adachi, Noritaka

    2015-06-30

    Targeted gene modification by homologous recombination provides a powerful tool for studying gene function in cells and animals. In higher eukaryotes, non-homologous integration of targeting vectors occurs several orders of magnitude more frequently than does targeted integration, making the gene-targeting technology highly inefficient. For this reason, negative-selection strategies have been employed to reduce the number of drug-resistant clones associated with non-homologous vector integration, particularly when artificial nucleases to introduce a DNA break at the target site are unavailable or undesirable. As such, an exon-trap strategy using a promoterless drug-resistance marker gene provides an effective way to counterselect non-homologous integrants. However, constructing exon-trapping targeting vectors has been a time-consuming and complicated process. By virtue of highly efficient att-mediated recombination, we successfully developed a simple and rapid method to construct plasmid-based vectors that allow for exon-trapping gene targeting. These exon-trap vectors were useful in obtaining correctly targeted clones in mouse embryonic stem cells and human HT1080 cells. Most importantly, with the use of a conditionally cytotoxic gene, we further developed a novel strategy for negative selection, thereby enhancing the efficiency of counterselection for non-homologous integration of exon-trap vectors. Our methods will greatly facilitate exon-trapping gene-targeting technologies in mammalian cells, particularly when combined with the novel negative selection strategy.

  13. Genetic algorithm in the structural design of Cooke triplet lenses

    NASA Astrophysics Data System (ADS)

    Hazra, Lakshminarayan; Banerjee, Saswatee

    1999-08-01

    This paper is part of our efforts to develop a systematic method for multicomponent lens design. Our aim is to find a suitable starting point in the final configuration space, so that popular local search methods like damped least squares (DLS) may directly lead to a useful solution. For 'ab initio' design problems, a thin-lens layout specifying the powers of the individual components and the intercomponent separations is worked out analytically. The central aberration targets required of the individual components in order to satisfy the prespecified primary aberration targets for the overall system are then determined by nonlinear optimization. The next step involves the structural design of the individual components by optimization techniques. This general method may be adapted for the design of triplets and their derivatives. However, for the thin-lens design of a Cooke triplet composed of three airspaced singlets, the two optimization steps mentioned above may be combined into a single optimization procedure. The optimum configuration for each of the singlets, catering to the required Gaussian specification and primary aberration targets for the Cooke triplet, is determined by an application of a genetic algorithm (GA). Our implementation of this algorithm is based on simulations of some complex tools of natural evolution, like selection, crossover, and mutation. Our version of GA may or may not converge to a unique optimum, depending on some of the algorithm-specific parameter values. With our algorithm, practically useful solutions are always available, although convergence to a global optimum cannot be guaranteed. This is perfectly in keeping with our need to allow 'floating' of aberration targets at the subproblem level. Some numerical results dealing with our preliminary investigations on this problem are presented.

  14. Long-Term Memories Bias Sensitivity and Target Selection in Complex Scenes

    PubMed Central

    Patai, Eva Zita; Doallo, Sonia; Nobre, Anna Christina

    2014-01-01

    In everyday situations we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention, and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location (Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006; Summerfield, Rao, Garside, & Nobre, 2011). The present study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc event-related potential to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top-down sources of information. PMID:23016670

  15. Optimal Wavelengths Selection Using Hierarchical Evolutionary Algorithm for Prediction of Firmness and Soluble Solids Content in Apples

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...

  16. Stimulus selection and tracking during urination: autoshaping directed behavior with toilet targets.

    PubMed Central

    Siegel, R K

    1977-01-01

    A simple procedure is described for investigating stimuli selected as targets during urination in the commode. Ten normal males preferred a floating target that could be tracked to a series of stationary targets. This technique was used to bring misdirected urinations in a severely retarded male under rapid stimulus control of a floating target in the commode. The float stimulus was also evaluated with nine institutionalized, moderately retarded males, and results indicated rapid autoshaping of directed urination without the use of verbal instructions or conventional toilet training. The technique can be applied in training children to control misdirected urinations in institutions for the retarded, in psychiatric wards with regressed populations, and in certain male school dormitories. PMID:885828

  17. Stimulus selection and tracking during urination: autoshaping directed behavior with toilet targets.

    PubMed

    Siegel, R K

    1977-01-01

    A simple procedure is described for investigating stimuli selected as targets during urination in the commode. Ten normal males preferred a floating target that could be tracked to a series of stationary targets. This technique was used to bring misdirected urinations in a severely retarded male under rapid stimulus control of a floating target in the commode. The float stimulus was also evaluated with nine institutionalized, moderately retarded males, and results indicated rapid autoshaping of directed urination without the use of verbal instructions or conventional toilet training. The technique can be applied in training children to control misdirected urinations in institutions for the retarded, in psychiatric wards with regressed populations, and in certain male school dormitories.

  18. Artificial neural networks for acoustic target recognition

    NASA Astrophysics Data System (ADS)

    Robertson, James A.; Mossing, John C.; Weber, Bruce A.

    1995-04-01

    Acoustic sensors can be used to detect, track, and identify non-line-of-sight targets passively. Attempts to alter acoustic emissions often result in an undesirable performance degradation. This research project investigates the use of neural networks for differentiating between features extracted from the acoustic signatures of sources. Acoustic data were filtered and digitized using a commercially available analog-to-digital converter. The digital data were transformed to the frequency domain for additional processing using the FFT. Narrowband peak detection algorithms were incorporated to select peaks above a user-defined SNR. These peaks were then used to generate a set of robust features which relate specifically to target components under varying background conditions. The features were then used as input to a backpropagation neural network. A K-means unsupervised clustering algorithm was used to determine the natural clustering of the observations. Comparisons were made between a feature set consisting of the normalized amplitudes of the first 250 frequency bins of the power spectrum and a set of 11 harmonically related features. Initial results indicate that even though some different target types had a tendency to group in the same clusters, the neural network was able to differentiate the targets. Successful identification of acoustic sources under varying operational conditions with high confidence levels was achieved.
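
    A minimal sketch of the described front end, an FFT power spectrum followed by SNR-thresholded narrowband peak picking, is given below; the sampling rate, the threshold convention (median noise floor), and the test signal are assumptions.

        import numpy as np

        def narrowband_peaks(signal, fs, snr_db=10.0):
            # Power spectrum via FFT, then keep local maxima whose level
            # exceeds the median noise floor by a user-defined SNR.
            spec = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            thresh = np.median(spec) * 10 ** (snr_db / 10)
            is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
            idx = np.where(is_peak & (spec[1:-1] > thresh))[0] + 1
            return freqs[idx], spec[idx]

        # Example: a 60 Hz fundamental plus harmonics buried in noise.
        fs, t = 8000, np.arange(0, 1.0, 1.0 / 8000)
        x = sum(np.sin(2 * np.pi * 60 * k * t) / k for k in range(1, 6))
        x += 0.5 * np.random.default_rng(0).normal(size=t.size)
        print(narrowband_peaks(x, fs)[0])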

  19. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
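
    The Yitzhaky-Peli idea can be sketched as follows: sweep parameter combinations, estimate ground truth from the ensemble of feature-detected images (here a simple majority vote, a simplification of their estimators), score each combination as an ROC point against that estimate, and keep the combination closest to the ideal corner.

        import numpy as np

        def select_parameters(feature_maps):
            # feature_maps: dict {param_combo: binary feature-detected image}.
            maps = np.stack(list(feature_maps.values())).astype(bool)
            # Estimated ground truth: pixels detected by a majority of combos.
            est_gt = maps.sum(axis=0) > maps.shape[0] / 2
            best, best_d = None, np.inf
            for combo, m in feature_maps.items():
                m = m.astype(bool)
                tp = np.sum(m & est_gt); fp = np.sum(m & ~est_gt)
                fn = np.sum(~m & est_gt); tn = np.sum(~m & ~est_gt)
                tpr = tp / max(tp + fn, 1)
                fpr = fp / max(fp + tn, 1)
                d = np.hypot(fpr, tpr - 1.0)   # distance to ideal ROC corner (0, 1)
                if d < best_d:
                    best, best_d = combo, d
            return best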

  20. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling under a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratio between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimization and production algorithm to other complicated systems such as biochemical reactions in solution.
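
    The weighting factors are, up to a sign, log partition function ratios, which can be obtained by iterating the self-consistent (binless) WHAM equations. The sketch below is a generic WHAM solver, not the adaptive update scheme proposed in the paper; the input conventions are assumptions.

        import numpy as np

        def wham_free_energies(u_kn, n_samples, n_iter=1000, tol=1e-10):
            # u_kn[k, n]: reduced energy of pooled sample n evaluated in state k.
            # n_samples[k]: number of samples contributed by state k.
            K = u_kn.shape[0]
            f = np.zeros(K)                    # log weighting factors
            for _ in range(n_iter):
                # Mixture denominator for each pooled sample.
                log_den = np.logaddexp.reduce(
                    np.log(n_samples)[:, None] + f[:, None] - u_kn, axis=0)
                f_new = -np.logaddexp.reduce(-u_kn - log_den[None, :], axis=1)
                f_new -= f_new[0]              # fix the gauge: f_0 = 0
                if np.max(np.abs(f_new - f)) < tol:
                    break
                f = f_new
            return f   # partition function ratios: Z_k / Z_0 = exp(-f_k)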

  1. Comparison of Naive Bayes and Decision Tree on Feature Selection Using Genetic Algorithm for Classification Problem

    NASA Astrophysics Data System (ADS)

    Rahmadani, S.; Dongoran, A.; Zarlis, M.; Zakarias

    2018-03-01

    This paper discusses the problem of feature selection using genetic algorithms (GA) on datasets for classification problems. The classification models used are the decision tree (DT) and Naive Bayes. We discuss how the Naive Bayes and Decision Tree models handle the classification problem on datasets whose features are selected using a GA. The performance of the two models is then compared to determine whether feature selection yields an increase in accuracy. The results obtained show an increase in accuracy when features are selected using the GA. The proposed models are referred to as GADT (GA-Decision Tree) and GANB (GA-Naive Bayes). The datasets tested in this paper are taken from the UCI Machine Learning repository.
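
    A minimal sketch of GA-based feature selection with a Naive Bayes fitness (in the spirit of GANB) follows; the population size, operator choices, and cross-validation folds are illustrative assumptions.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def fitness(mask, X, y):
            # Fitness of a bit mask: cross-validated Naive Bayes accuracy.
            if not mask.any():
                return 0.0
            return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

        def ga_select(X, y, pop_size=30, gens=40, p_mut=0.05):
            n = X.shape[1]
            pop = rng.random((pop_size, n)) < 0.5            # random bit masks
            for _ in range(gens):
                fits = np.array([fitness(ind, X, y) for ind in pop])
                # Tournament selection of parents.
                idx = [max(rng.choice(pop_size, 2), key=lambda i: fits[i])
                       for _ in range(pop_size)]
                parents = pop[idx]
                # One-point crossover.
                children = parents.copy()
                for i in range(0, pop_size - 1, 2):
                    c = rng.integers(1, n)
                    children[i, c:] = parents[i + 1, c:]
                    children[i + 1, c:] = parents[i, c:]
                # Bit-flip mutation.
                children ^= rng.random((pop_size, n)) < p_mut
                pop = children
            fits = np.array([fitness(ind, X, y) for ind in pop])
            return pop[fits.argmax()]                        # best feature mask

        # Example: X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
        # mask = ga_select(X, y); print(mask.sum(), "features kept")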

  2. Phosphatidylserine-targeted liposome for enhanced glioma-selective imaging.

    PubMed

    Zhang, Liang; Habib, Amyn A; Zhao, Dawen

    2016-06-21

    Phosphatidylserine (PS), which is normally intracellular, becomes exposed on the outer surface of viable endothelial cells (ECs) of tumor vasculature. Utilizing a PS-targeting antibody, we have recently established a PS-targeted liposomal (PS-L) nanoplatform that has proven to be highly tumor-selective. Because the vascular lumen-exposed PS is immediately accessible without a need to penetrate the intact blood brain barrier (BBB), we hypothesize that the systemically administered PS-L binds specifically to tumor vascular ECs, becomes subsequently internalized into the cells, and then enables its cargoes to be efficiently delivered to glioma parenchyma. To test this, we loaded the PS-L with dual MRI/optical imaging contrast agents and injected it intravenously into mice bearing intracranial U87 glioma. At 24 h, both in vivo optical imaging and MRI depicted enhanced tumor contrast, distinct from the surrounding normal brain. Intriguingly, longitudinal MRI revealed the temporal and spatial intratumoral distribution of the PS-L by following MRI contrast changes, which appeared punctate in the tumor periphery at an earlier time point (4 h), but became clustered and disseminated throughout the tumor at 24 h post injection. Importantly, the glioma-targeting specificity of the PS-L was antigen-specific, since a control probe of irrelevant specificity showed minimal accumulation in the glioma. Together, these results indicate that the PS-L nanoplatform enables enhanced, glioma-targeted delivery of imaging contrast agents by crossing the tumor BBB efficiently, and it may also serve as a useful nanoplatform for anti-glioma drugs.

  3. Aerial surveillance based on hierarchical object classification for ground target detection

    NASA Astrophysics Data System (ADS)

    Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo

    2015-03-01

    Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and their ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; i.e., a camera sensor is now integrated. Mounted cameras allow the flexibility to monitor several regions of interest, displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a huge binary space, in which properties such as independence and associative operators are defined to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features of the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under several conditions.

  4. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) have to be handled in the main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
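
    The arithmetic of one FO iteration (dose calculation, objective function, and gradient) reduces to sparse matrix-vector products with the DID. The sketch below shows this core under the assumption of a weighted quadratic objective; the NUMA node binding itself lies outside what plain Python can express.

        import numpy as np
        from scipy import sparse

        def fo_iteration(D, w, d_presc, weights):
            # D: sparse dose-influence matrix (voxels x beamlets); w: fluence.
            d = D @ w                                  # dose calculation
            diff = weights * (d - d_presc)
            obj = 0.5 * float(diff @ (d - d_presc))    # weighted quadratic objective
            grad = D.T @ diff                          # gradient w.r.t. fluence
            return obj, grad

        # Illustrative sizes; clinical DID is orders of magnitude larger.
        V, B = 100000, 2000
        D = sparse.random(V, B, density=0.01, format="csr", random_state=0)
        w = np.ones(B)
        obj, grad = fo_iteration(D, w, d_presc=np.ones(V), weights=np.ones(V))
        w = np.maximum(w - 1e-3 * grad, 0.0)           # projected gradient step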

  5. Improved genome-scale multi-target virtual screening via a novel collaborative filtering approach to cold-start problem

    PubMed Central

    Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar

    2016-01-01

    The conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
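
    The flavor of the approach can be conveyed with a generic weighted alternating-least-squares factorization: observed chemical-protein pairs get higher confidence weights, and both factor matrices carry Tikhonov regularization. This is a standard one-class collaborative filtering sketch, not the authors' exact weighting or regularization scheme.

        import numpy as np

        def weighted_als(R, k=10, lam=0.1, alpha=5.0, iters=15, seed=0):
            # R: binary chemical-protein interaction matrix (1 = known pair).
            # Observed pairs are weighted more heavily than unobserved ones;
            # Tikhonov regularization acts on both factor matrices.
            rng = np.random.default_rng(seed)
            n_c, n_p = R.shape
            C = rng.normal(scale=0.1, size=(n_c, k))   # chemical factors
            P = rng.normal(scale=0.1, size=(n_p, k))   # protein factors
            W = 1.0 + alpha * R                        # confidence weights
            for _ in range(iters):
                for i in range(n_c):                   # update chemicals
                    Wi = np.diag(W[i])
                    A = P.T @ Wi @ P + lam * np.eye(k)
                    C[i] = np.linalg.solve(A, P.T @ Wi @ R[i])
                for j in range(n_p):                   # update proteins
                    Wj = np.diag(W[:, j])
                    A = C.T @ Wj @ C + lam * np.eye(k)
                    P[j] = np.linalg.solve(A, C.T @ Wj @ R[:, j])
            return C @ P.T                             # predicted interaction scores

        # Example with a sparse synthetic interaction matrix; rank candidate
        # targets for each chemical by the predicted scores.
        R = (np.random.default_rng(1).random((60, 40)) < 0.05).astype(float)
        scores = weighted_als(R)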

  6. Improved genome-scale multi-target virtual screening via a novel collaborative filtering approach to cold-start problem.

    PubMed

    Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar

    2016-12-13

    The conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.

  7. Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics

    PubMed Central

    Shi, Tujin; Su, Dian; Liu, Tao; Tang, Keqi; Camp, David G.; Qian, Wei-Jun; Smith, Richard D.

    2012-01-01

    Selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity for, e.g., detecting low-abundance biomarkers likely present at the low ng/mL to pg/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides including posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at low to sub-ng/mL levels in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for detection of pg/mL level proteins in plasma are also discussed. PMID:22577010

  8. Impact of high-risk conjunctions on Active Debris Removal target selection

    NASA Astrophysics Data System (ADS)

    Lidtke, Aleksander A.; Lewis, Hugh G.; Armellin, Roberto

    2015-10-01

    Space debris simulations show that if current space launches continue unchanged, spacecraft operations might become difficult in the congested space environment. It has been suggested that Active Debris Removal (ADR) might be necessary in order to prevent such a situation. Selection of objects to be targeted by ADR is considered important because removal of non-relevant objects will unnecessarily increase the cost of ADR. One of the factors to be used in this ADR target selection is the collision probability accumulated by every object. This paper shows the impact of high-probability conjunctions on the collision probability accumulated by individual objects as well as the probability of any collision occurring in orbit. Such conjunctions cannot be predicted far in advance and, consequently, not all the objects that will be involved in such dangerous conjunctions can be removed through ADR. Therefore, a debris remediation method that would address such events at short notice, and thus help prevent likely collisions, is suggested.
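
    Under the assumption of independent conjunctions, the probability accumulated by an object follows the standard complement rule, which also makes the paper's point visible: a single high-probability conjunction can dominate thousands of routine ones. A minimal sketch:

        import numpy as np

        def accumulated_probability(p_conjunctions):
            # Probability that an object is involved in at least one collision,
            # assuming its conjunctions are independent events.
            p = np.asarray(p_conjunctions, dtype=float)
            return 1.0 - np.prod(1.0 - p)

        # Example: many low-probability events vs. one high-risk conjunction.
        routine = accumulated_probability([1e-5] * 1000)            # ~0.0100
        with_high_risk = accumulated_probability([1e-5] * 1000 + [1e-2])
        print(routine, with_high_risk)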

  9. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes’ being selected by the event it manages, using the distance level, the residual energy level, and the number of dynamic coverage events of these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption and the residual energy variance of its neighbor nodes, together with the detection performance of the events it manages, as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837
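
    The final TOPSIS step is standard and easy to sketch: normalize the Pareto set, locate the ideal and negative-ideal points, and rank strategies by relative closeness. The objective columns, weights, and benefit/cost flags below are illustrative assumptions.

        import numpy as np

        def topsis(F, weights, benefit):
            # F: Pareto set, one row per candidate strategy, one column per
            # objective; benefit[j] is True if larger is better for column j.
            Z = F / np.linalg.norm(F, axis=0)          # vector normalization
            V = Z * weights
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - nadir, axis=1)
            closeness = d_neg / (d_pos + d_neg)
            return int(np.argmax(closeness))           # index of the best strategy

        # Example: minimize expected energy use and variance, maximize detection.
        F = np.array([[2.1, 0.30, 0.90], [1.8, 0.40, 0.85], [2.5, 0.20, 0.95]])
        best = topsis(F, weights=np.array([1/3, 1/3, 1/3]),
                      benefit=np.array([False, False, True]))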

  11. Out with the old? The role of selective attention in retaining targets in partial report.

    PubMed

    Lindsey, Dakota R B; Bundesen, Claus; Kyllingsbæk, Søren; Petersen, Anders; Logan, Gordon D

    2017-01-01

    In the partial-report task, subjects are asked to report only a portion of the items presented. Selective attention chooses which objects to represent in short-term memory (STM) on the basis of their relevance. Because STM is limited in capacity, one must sometimes choose which objects are removed from memory in light of new relevant information. We tested the hypothesis that the choices among newly presented information and old information in STM involve the same process: that both are acts of selective attention. We tested this hypothesis using a two-display partial-report procedure. In this procedure, subjects had to select and retain relevant letters (targets) from two sequentially presented displays. If selection in perception and retention in STM are the same process, then irrelevant letters (distractors) in the second display, which demanded attention because of their similarity to the targets, should have decreased target report from the first display. This effect was not obtained in any of four experiments. Thus, choosing objects to keep in STM is not the same process as choosing new objects to bring into STM.

  12. Target Search & Selection for the DI/EPOXI Spacecraft

    NASA Technical Reports Server (NTRS)

    Grebow, Daniel J.; Bhaskaran, Shyam; Chesley, Steven R.

    2012-01-01

    Upon completion of the Hartley 2 flyby in November 2010, the Deep Impact (DI) spacecraft resided in a solar orbit with no possibility of a gravity assist with any large body. Conservative estimates indicated that the remaining fuel was enough to provide only an 18 m/s impulse to the spacecraft. We present the method and results of our systematic scan of potential small-body encounters for DI, and the criteria used to narrow the selection to the asteroid 2002 GT as the flyby target. The mission profile has two deterministic maneuvers to achieve the encounter, the first of which executed on November 25, 2011.

  13. Margin selection to compensate for loss of target dose coverage due to target motion during external‐beam radiation therapy of the lung

    PubMed Central

    Osei, Ernest; Barnett, Rob

    2015-01-01

    The aim of this study is to provide guidelines for the selection of external‐beam radiation therapy target margins to compensate for target motion in the lung during treatment planning. A convolution model was employed to predict the effect of target motion on the delivered dose distribution. The accuracy of the model was confirmed with radiochromic film measurements in both static and dynamic phantom modes. 502 unique patient breathing traces were recorded and used to simulate the effect of target motion on a dose distribution. A 1D probability density function (PDF) representing the position of the target throughout the breathing cycle was generated from each breathing trace obtained during 4D CT. Changes in the target D95 (the minimum dose received by 95% of the treatment target) due to target motion were analyzed and shown to correlate with the standard deviation of the PDF. Furthermore, the amount of target D95 recovered per millimeter of increased field width was also shown to correlate with the standard deviation of the PDF. The sensitivity of changes in dose coverage with respect to target size was also determined. Margin selection recommendations that can be used to compensate for loss of target D95 were generated based on the simulation results. These results are discussed in the context of clinical plans. We conclude that, for PDF standard deviations less than 0.4 cm with target sizes greater than 5 cm, little or no additional margins are required. Targets which are smaller than 5 cm with PDF standard deviations larger than 0.4 cm are most susceptible to loss of coverage. The largest additional required margin in this study was determined to be 8 mm. PACS numbers: 87.53.Bn, 87.53.Kn, 87.55.D‐, 87.55.Gh
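
    The convolution model used in this study blurs the static dose profile with the probability density function of target position. A one-dimensional sketch of that idea (the Gaussian PDF and the box-shaped dose profile are illustrative assumptions, not the paper's measured data):

    ```python
    import numpy as np

    dx = 0.05                                 # grid spacing, cm
    x = np.arange(-10, 10, dx)                # position grid along motion axis, cm
    static_dose = np.where(np.abs(x) <= 3.0, 1.0, 0.0)  # idealized 6 cm field

    sigma = 0.4                               # PDF standard deviation, cm
    pdf = np.exp(-x**2 / (2 * sigma**2))
    pdf /= pdf.sum() * dx                     # normalize to unit area

    # Expected (motion-blurred) dose = static dose convolved with the motion PDF.
    blurred_dose = np.convolve(static_dose, pdf, mode="same") * dx

    # D95-style check: minimum dose over a 5 cm target centred in the field.
    print(blurred_dose[np.abs(x) <= 2.5].min())
    ```

    Widening the field in this sketch and re-checking the target minimum reproduces, in miniature, the paper's "dose recovered per millimeter of added field width" analysis.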

  14. Transient analysis mode participation for modal survey target mode selection using MSC/NASTRAN DMAP

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Ibrahim, Omar M.; Sullivan, Timothy L.; Goodnight, Thomas W.

    1994-01-01

    Many methods have been developed to aid analysts in identifying component modes which contribute significantly to component responses. These modes, typically targeted for dynamic model correlation via a modal survey, are known as target modes. Most methods used to identify target modes are based on component global dynamic behavior. It is sometimes unclear if these methods identify all modes contributing to responses important to the analyst. These responses are usually those in areas of hardware design concerns. One method used to check the completeness of target mode sets and identify modes contributing significantly to important component responses is mode participation. With this method, the participation of component modes in dynamic responses is quantified. Those modes which have high participation are likely modal survey target modes. Mode participation is most beneficial when it is used with responses from analyses simulating actual flight events. For spacecraft, these responses are generated via a structural dynamic coupled loads analysis. Using MSC/NASTRAN DMAP, a method has been developed for calculating mode participation based on transient coupled loads analysis results. The algorithm has been implemented to be compatible with an existing coupled loads methodology and has been used successfully to develop a set of modal survey target modes.
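
    Mode participation of the kind described can be quantified, in a simplified form, as each mode's share of a response quantity recovered from transient modal coordinates. A rough sketch under that assumption (the modal contributions and coordinate histories are random placeholders, not NASTRAN output):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_modes, n_steps = 10, 2000
    phi_r = rng.normal(size=n_modes)          # modal contributions to one response DOF
    q = rng.normal(size=(n_modes, n_steps))   # modal coordinate time histories

    contrib = phi_r[:, None] * q              # per-mode contribution to the response
    total = contrib.sum(axis=0)
    peak = np.argmax(np.abs(total))           # time of peak response
    participation = np.abs(contrib[:, peak]) / np.abs(total[peak])

    targets = np.argsort(-participation)[:3]  # likely modal-survey target modes
    print(targets, participation[targets])
    ```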

  15. Distributed estimation for adaptive sensor selection in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Mahmoud, Magdi S.; Hassan Hamid, Matasm M.

    2014-05-01

    Wireless sensor networks (WSNs) are usually deployed for monitoring systems with the distributed detection and estimation of sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
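
    The adaptive part of ASS, as described, keys the number of active sensors to the magnitude of the absolute local innovations vector (the discrepancy between measurement and prediction). A schematic sketch of such a rule; the thresholds and sensor counts are invented for illustration, not taken from the paper:

    ```python
    def choose_active_sensor_count(innovation_norm,
                                   n_min=2, n_max=8,
                                   low=0.5, high=3.0):
        """Map the absolute local innovation onto a sensor budget:
        small innovation -> few sensors, large innovation -> many."""
        if innovation_norm <= low:
            return n_min
        if innovation_norm >= high:
            return n_max
        frac = (innovation_norm - low) / (high - low)
        return round(n_min + frac * (n_max - n_min))

    print(choose_active_sensor_count(0.3), choose_active_sensor_count(2.0))
    ```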

  16. Oncotripsy: Targeting cancer cells selectively via resonant harmonic excitation

    NASA Astrophysics Data System (ADS)

    Heyden, S.; Ortiz, M.

    2016-07-01

    We investigate a method of selectively targeting cancer cells by means of ultrasound harmonic excitation at their resonance frequency, which we refer to as oncotripsy. The geometric model of the cells takes into account the cytoplasm, nucleus and nucleolus, as well as the plasma membrane and nuclear envelope. Material properties are varied within a pathophysiologically-relevant range. A first modal analysis reveals the existence of a spectral gap between the natural frequencies and, most importantly, resonant growth rates of healthy and cancerous cells. The results of the modal analysis are verified by simulating the fully-nonlinear transient response of healthy and cancerous cells at resonance. The fully nonlinear analysis confirms that cancerous cells can be selectively taken to lysis by the application of carefully tuned ultrasound harmonic excitation while simultaneously leaving healthy cells intact.

  17. Radiology Reports With Hyperlinks Improve Target Lesion Selection and Measurement Concordance in Cancer Trials.

    PubMed

    Machado, Laura B; Apolo, Andrea B; Steinberg, Seth M; Folio, Les R

    2017-02-01

    Radiology reports often lack the measurements of target lesions that are needed for oncology clinical trials. When available, the measurements in radiology reports often do not match those in the records used to calculate therapeutic response. This study assessed the clinical value of hyperlinked tumor measurements in multimedia-enhanced radiology reports in the PACS and the inclusion of a radiologist assistant in the process of assessing tumor burden. We assessed 489 target lesions in 232 CT examinations of 71 patients with metastatic genitourinary cancer enrolled in two therapeutic trials. We analyzed target lesion selection and measurement concordance between oncology records (used to calculate therapeutic response) and two types of radiology reports in the PACS: multimedia-enhanced radiology reports and text-only reports. For statistical tests, we used the Wilcoxon signed rank test, the Wilcoxon rank sum test, and the Fisher method to combine p values from the paired and unpaired results. The Fisher exact test was used to compare overall measurement concordance. Concordance on target lesion selection was greater for multimedia-enhanced radiology reports (78%) than for text-only reports (52%) (p = 0.0050). There was also improved overall measurement concordance with the multimedia-enhanced radiology reports (68%) compared with the text-only reports (38%) (p < 0.0001). Compared with text-only reports, hyperlinked multimedia-enhanced radiology reports improved concordance of target lesion selection and measurement with the measurements used to calculate therapeutic response.

  18. Initial basalt target site selection evaluation for the Mars penetrator drop test

    NASA Technical Reports Server (NTRS)

    Bunch, T. E.; Quaide, W. L.; Polkowski, G.

    1976-01-01

    Potential basalt target sites for an air-drop penetrator test are described, and the criteria involved in site selection are discussed. A summary of the background field geology and recommendations for optimum sites are also presented.

  19. A Comparison of Two Skip Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Putnam, Zachary R.

    2007-01-01

    The Orion capsule vehicle will have a Lift-to-Drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A Skip-Trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns anytime during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided Skip-Trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC, and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms is presented in this paper. Each algorithm has been implemented in a high-fidelity, 6-degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges that include direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and Reaction Control System (RCS). Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control

  20. Particle Filtering with Region-based Matching for Tracking of Partially Occluded and Scaled Targets*

    PubMed Central

    Nakhmani, Arie; Tannenbaum, Allen

    2012-01-01

    Visual tracking of arbitrary targets in clutter is important for a wide range of military and civilian applications. We propose a general framework for the tracking of scaled and partially occluded targets, which do not necessarily have prominent features. The algorithm proposed in the present paper utilizes a modified normalized cross-correlation as the likelihood for a particle filter. The algorithm divides the template, selected by the user in the first video frame, into numerous patches. The matching process of these patches by particle filtering allows one to handle the target’s occlusions and scaling. Experimental results with fixed rectangular templates show that the method is reliable for videos with nonstationary, noisy, and cluttered background, and provides accurate trajectories in cases of target translation, scaling, and occlusion. PMID:22506088
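
    The likelihood used above, a normalized cross-correlation (NCC) between a template patch and the image region under each particle, can be sketched as follows. The exponential scaling of NCC into a likelihood is a common simplifying assumption, not necessarily the paper's exact modification:

    ```python
    import numpy as np

    def ncc(patch, template):
        """Normalized cross-correlation between two equally sized patches."""
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12
        return float((p * t).sum() / denom)

    def particle_likelihoods(frame, template, particles, beta=5.0):
        """Weight each particle (x, y = patch top-left corner, in bounds)
        by exp(beta * NCC) and normalize to a probability distribution."""
        h, w = template.shape
        weights = np.empty(len(particles))
        for i, (x, y) in enumerate(particles):
            region = frame[y:y + h, x:x + w]
            weights[i] = np.exp(beta * ncc(region, template))
        return weights / weights.sum()
    ```

    Splitting the template into patches and scoring each patch separately, as the paper does, lets partially occluded patches be down-weighted without losing the whole target.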

  1. Biased normalized cuts for target detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.

    2016-05-01

    The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.

  2. Novel Chemokine-Based Immunotoxins for Potent and Selective Targeting of Cytomegalovirus Infected Cells

    PubMed Central

    Spiess, Katja; Jeppesen, Mads G.; Malmgaard-Clausen, Mikkel; Krzywkowski, Karen; Kledal, Thomas N.; Rosenkilde, Mette M.

    2017-01-01

    Immunotoxins as antiviral therapeutics are largely unexplored but have promising prospects due to their high selectivity potential and their unparalleled efficiency. One recent example targeted the virus-encoded G protein-coupled receptor US28 as a strategy for specific and efficient treatment of human cytomegalovirus (HCMV) infections. US28 is expressed on virus-infected cells and scavenges chemokines by rapid internalization. The chemokine-based fusion-toxin protein (FTP) consisted of a variant (F49A) of CX3CL1 specifically targeting US28, linked to the catalytic domain of Pseudomonas exotoxin A (PE). Here, we systematically sought to improve F49A-FTP by modifications in its three structural domains; we generated variants with (1) an altered chemokine sequence (K14A, F49L, and F49E), (2) shortened and elongated linker regions, and (3) a modified toxin domain. Only F49L-FTP displayed higher selectivity in its binding to US28 versus CX3CR1, the endogenous receptor for CX3CL1, but this was not matched by more selective killing of US28-expressing cells. A longer linker and different toxin variants decreased US28 affinity and selective killing. Therefore, F49A-FTP represents the best candidate for HCMV treatment. Many viruses encode internalizing receptors, suggesting that not only HCMV but also, for instance, Epstein-Barr virus and Kaposi's sarcoma-associated herpesvirus may be targeted by FTPs. PMID:28251165

  4. An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing

    NASA Astrophysics Data System (ADS)

    Zhao, Yunji; Pei, Hailong

    In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using Camshift; second, detect feature points in the region acquired above using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
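
    A rough OpenCV sketch of the three-step pipeline described above (Camshift region gating, SURF keypoints, descriptor matching). Note that SURF is patented and lives in opencv-contrib's non-free module, so its availability, and all window and threshold values here, are assumptions rather than the paper's settings:

    ```python
    import cv2

    def track_step(frame, template_gray, window, roi_hist):
        """One detect-and-match cycle: Camshift -> SURF -> ratio-test matching."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # Step 1: Camshift narrows the search to a candidate target region.
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        _, window = cv2.CamShift(backproj, window, criteria)
        x, y, w, h = window
        region = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

        # Step 2: SURF feature points, computed only inside the candidate region.
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp_r, des_r = surf.detectAndCompute(region, None)
        kp_t, des_t = surf.detectAndCompute(template_gray, None)

        # Step 3: match template descriptors to the region (Lowe ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_t, des_r, k=2)
                if m.distance < 0.7 * n.distance]
        return window, good
    ```

    Restricting SURF to the Camshift window is the source of the claimed speed-up: feature extraction cost scales with the region size, not the full frame.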

  5. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noise, makes target localization a challenging problem. This paper relies on a fraction of the sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick a fraction of the sensor nodes, following a short-distance pattern, to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, in which an iterative regression strategy is used to deal with the distorted data caused by sensing noise and to improve the observation accuracy. At the same time, we integrate the observations to formulate the likelihood function, which effectively updates the particle weights. Thus, particle effectiveness is enhanced, which avoids the "particle degeneracy" problem and improves localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select a subset of sensor nodes to improve the communication efficiency of the sensor network.
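
    LSSVR, the observation model at the heart of the scheme, has a convenient closed form: training reduces to a single linear solve of the KKT system. A minimal sketch with an RBF kernel (the kernel width and regularization values are illustrative, and the iterative re-weighting the paper adds on top is omitted):

    ```python
    import numpy as np

    def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
        """Least-squares SVR: solve the KKT system for alphas and bias b."""
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))            # RBF kernel matrix
        n = len(y)
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        b, alpha = sol[0], sol[1:]
        return alpha, b, X, sigma

    def lssvr_predict(model, Xq):
        alpha, b, X, sigma = model
        sq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b
    ```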

  6. Use of an Improved Matching Algorithm to Select Scaffolds for Enzyme Design Based on a Complex Active Site Model.

    PubMed

    Huang, Xiaoqiang; Xue, Jing; Lin, Min; Zhu, Yushan

    2016-01-01

    Active site preorganization helps native enzymes electrostatically stabilize the transition state better than the ground state for their primary substrates and achieve significant rate enhancement. In this report, we hypothesize that a complex active site model for active site preorganization modeling should help to create preorganized active site designs and afford higher starting activities towards target reactions. Our matching algorithm ProdaMatch was improved by invoking effective pruning strategies, and the native active sites for ten scaffolds in a benchmark test set were reproduced. The root-mean-square deviations between the matched transition states and those in the crystal structures were < 1.0 Å for the ten scaffolds, and the repacking calculation results showed that 91% of the hydrogen bonds within the active sites are recovered, indicating that the active sites can be preorganized based on the predicted positions of transition states. The application of the complex active site model for de novo enzyme design was evaluated by scaffold selection using a classic catalytic triad motif for the hydrolysis of p-nitrophenyl acetate. Eighty scaffolds were identified from a scaffold library with 1,491 proteins, and four scaffolds were native esterases. Furthermore, enzyme design for complicated substrates was investigated for the hydrolysis of cephalexin using scaffold selection based on two different catalytic motifs. Only three scaffolds were identified from the scaffold library by virtue of the classic catalytic-triad-based motif. In contrast, 40 scaffolds were identified using a more flexible, but still preorganized, catalytic motif, where one scaffold corresponded to the α-amino acid ester hydrolase that catalyzes the hydrolysis and synthesis of cephalexin. Thus, the complex active site modeling approach for de novo enzyme design with the aid of the improved ProdaMatch program is a promising approach for the creation of active sites with high catalytic activity.

  8. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
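
    The three benchmarks named above have standard closed forms, shown here for reference (these are the conventional definitions; the paper may use shifted or scaled variants):

    ```python
    import numpy as np

    def griewank(x):
        x = np.asarray(x, dtype=float)
        i = np.arange(1, x.size + 1)
        return 1.0 + (x ** 2).sum() / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

    def rastrigin(x):
        x = np.asarray(x, dtype=float)
        return 10.0 * x.size + ((x ** 2) - 10.0 * np.cos(2 * np.pi * x)).sum()

    def rosenbrock(x):
        x = np.asarray(x, dtype=float)
        return (100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2).sum()

    # All three have global minimum value 0 (at the all-ones point for
    # Rosenbrock, at the origin for the other two).
    print(griewank(np.zeros(5)), rastrigin(np.zeros(5)), rosenbrock(np.ones(5)))
    ```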

  9. High-resolution inverse synthetic aperture radar imaging for large rotation angle targets based on segmented processing algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan

    2017-01-01

    In inverse synthetic aperture radar (ISAR) imaging, migration through resolution cells (MTRC) occurs when the rotation angle of the moving target is large, thereby degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of a target with large rotation are divided into several small segments, and every segment can generate a low-resolution image without MTRC. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulation and real experiments show that the proposed algorithm can deal with radar systems having different range and cross-range resolutions and can effectively compensate for the MTRC.

  10. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    NASA Astrophysics Data System (ADS)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  11. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely the placement of receivers in suitable places and the selection of appropriate illuminators. The present study investigates issues concerning the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key of the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.
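
    A schematic of the bisection strategy described above: binary-search the achievable required-RCS value, and at each trial value ask a set-covering oracle whether every surveillance cell can be covered. The greedy covering step below stands in for the paper's hybrid convex/greedy-dropping solver, and all names and bounds are illustrative assumptions:

    ```python
    def greedy_cover(cells, candidate_sets):
        """Pick candidate sets until all cells are covered (greedy heuristic).
        Each candidate is a dict with a 'covers' set of cell ids."""
        uncovered, chosen = set(cells), []
        while uncovered:
            best = max(candidate_sets, key=lambda s: len(uncovered & s["covers"]))
            if not uncovered & best["covers"]:
                return None                   # infeasible: some cell unreachable
            chosen.append(best)
            uncovered -= best["covers"]
        return chosen

    def bisect_required_rcs(cells, layout_fn, lo=0.001, hi=100.0, tol=0.01):
        """Smallest required-RCS value for which a feasible network exists.
        layout_fn(rcs) returns candidate receiver/illuminator pairings, each
        with the set of cells it covers at that sensitivity level."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if greedy_cover(cells, layout_fn(mid)) is not None:
                hi = mid                      # feasible: tighten the requirement
            else:
                lo = mid
        return hi
    ```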

  12. Target objects defined by a conjunction of colour and shape can be selected independently and in parallel.

    PubMed

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2017-11-01

    It is generally assumed that during search for targets defined by a feature conjunction, attention is allocated sequentially to individual objects. We tested this hypothesis by tracking the time course of attentional processing biases with the N2pc component in tasks where observers searched for two targets defined by a colour/shape conjunction. In Experiment 1, two displays presented in rapid succession (100 ms or 10 ms SOA) each contained a target and a colour-matching or shape-matching distractor on opposite sides. Target objects in both displays elicited N2pc components of similar size that overlapped in time when the SOA was 10 ms, suggesting that attention was allocated in parallel to both targets. Analogous results were found in Experiment 2, where targets and partially matching distractors were both accompanied by an object without target-matching features. Colour-matching and shape-matching distractors also elicited N2pc components, and the target N2pc was initially identical to the sum of the two distractor N2pcs, suggesting that the initial phase of attentional object selection was guided independently by feature templates for target colour and shape. Beyond 230 ms after display onset, the target N2pc became superadditive, indicating that attentional selection processes now started to be sensitive to the presence of feature conjunctions. Results show that independent attentional selection processes can be activated in parallel by two target objects in situations where these objects are defined by a feature conjunction.

  13. Model of ballistic targets' dynamics used for trajectory tracking algorithms

    NASA Astrophysics Data System (ADS)

    Okoń-Fąfara, Marta; Kawalec, Adam; Witczak, Andrzej

    2017-04-01

    Only a few ballistic object tracking algorithms are known. To develop such algorithms and to test them further, it is necessary to implement a possibly simple yet reliable model of the objects' dynamics. The article presents the dynamics model of a tactical ballistic missile (TBM), including the three stages of flight: the boost stage and two passive stages, the ascending one and the descending one. Additionally, the procedure for transformation from the local coordinate system to the polar radar-oriented and global coordinate systems is presented. The prepared theoretical data may be used to determine tracking algorithm parameters and for their further verification.
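
    The coordinate transformation mentioned above, from local Cartesian coordinates to radar-oriented polar (range, azimuth, elevation) coordinates, follows standard geometry. A minimal sketch, assuming an East-North-Up frame centred on the radar (the frame convention is an assumption):

    ```python
    import numpy as np

    def enu_to_radar_polar(e, n, u):
        """Convert local East-North-Up coordinates (radar at origin)
        to range, azimuth (from north, clockwise), and elevation."""
        rng = np.sqrt(e**2 + n**2 + u**2)
        azimuth = np.arctan2(e, n) % (2 * np.pi)
        elevation = np.arcsin(u / rng)
        return rng, azimuth, elevation

    # Example: a target 30 km east, 40 km north, 10 km up.
    print(enu_to_radar_polar(30e3, 40e3, 10e3))
    ```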

  14. How Are Mate Preferences Linked with Actual Mate Selection? Tests of Mate Preference Integration Algorithms Using Computer Simulations and Actual Mating Couples

    PubMed Central

    Conroy-Beam, Daniel; Buss, David M.

    2016-01-01

    Prior mate preference research has focused on the content of mate preferences. Yet in real life, people must select mates among potentials who vary along myriad dimensions. How do people incorporate information on many different mate preferences in order to choose which partner to pursue? Here, in Study 1, we compare seven candidate algorithms for integrating multiple mate preferences in a competitive agent-based model of human mate choice evolution. This model shows that a Euclidean algorithm is the most evolvable solution to the problem of selecting fitness-beneficial mates. Next, across three studies of actual couples (Study 2: n = 214; Study 3: n = 259; Study 4: n = 294) we apply the Euclidean algorithm toward predicting mate preference fulfillment overall and preference fulfillment as a function of mate value. Consistent with the hypothesis that mate preferences are integrated according to a Euclidean algorithm, we find that actual mates lie close in multidimensional preference space to the preferences of their partners. Moreover, this Euclidean preference fulfillment is greater for people who are higher in mate value, highlighting theoretically-predictable individual differences in who gets what they want. These new Euclidean tools have important implications for understanding real-world dynamics of mate selection. PMID:27276030
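
    The Euclidean algorithm referred to above scores a potential mate by distance in multidimensional preference space: the closer a candidate's trait vector lies to the chooser's preference vector, the better the fit. A minimal sketch (the trait dimensions and values are invented for illustration):

    ```python
    import numpy as np

    def preference_fulfillment(preferences, candidate_traits):
        """Negative Euclidean distance between a preference vector and a
        candidate's trait vector; higher (less negative) = better fit."""
        return -np.linalg.norm(preferences - candidate_traits)

    prefs = np.array([6.0, 8.0, 5.0])        # ratings on three trait dimensions
    candidates = {"A": np.array([6.5, 7.0, 5.5]),
                  "B": np.array([3.0, 9.0, 2.0])}
    best = max(candidates,
               key=lambda k: preference_fulfillment(prefs, candidates[k]))
    print(best)   # -> "A": closest in preference space
    ```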

  16. Target recognitions in multiple-camera closed-circuit television using color constancy

    NASA Astrophysics Data System (ADS)

    Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark

    2013-04-01

    People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution in the CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, there are many factors, such as variable illumination conditions, viewing angles, and camera calibration, that may induce illusive modification of intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand whether a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition from a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit used to assess the efficiency of target recognition is the area under the receiver operating characteristic curve (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognition substantially over the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to be improved by about 54% in AUROC after the data are processed by the ELRCC.
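
    The pixel-wise sigmoid compression that distinguishes ELRCC from plain luminance-based CC can be sketched as follows. The gain and midpoint parameters, and applying the mapping to a luminance channel, are assumptions for illustration, not the paper's exact formulation:

    ```python
    import numpy as np

    def sigmoid_compress(luminance, gain=10.0, midpoint=None):
        """Adaptive dynamic-range compression: map 8-bit luminance through
        a sigmoid centred on the mean level, then rescale to [0, 255]."""
        L = luminance.astype(float) / 255.0
        m = L.mean() if midpoint is None else midpoint
        out = 1.0 / (1.0 + np.exp(-gain * (L - m)))
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
        return (255 * out).astype(np.uint8)
    ```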

  17. Assessment, Target Selection, and Intervention: Dynamic Interactions within a Systemic Perspective

    ERIC Educational Resources Information Center

    Williams, A. Lynn

    2005-01-01

    There are a number of clinical options available for speech-language pathologists to choose from to analyze a child's phonological system, select treatment targets, and design intervention. Frequently, each of these areas of clinical options is viewed independently of one another or approached within an eclectic framework. In this article, an…

  18. VARSEDIG: an algorithm for morphometric characters selection and statistical validation in morphological taxonomy.

    PubMed

    Guisande, Cástor; Vari, Richard P; Heine, Jürgen; García-Roselló, Emilio; González-Dacosta, Jacinto; Perez-Schofield, Baltasar J García; González-Vilas, Luis; Pelayo-Villamil, Patricia

    2016-09-12

    We present and discuss VARSEDIG, an algorithm which identifies the morphometric features that significantly discriminate two taxa and validates the morphological distinctness between them via a Monte Carlo test. VARSEDIG is freely available as a function of the RWizard application PlotsR (http://www.ipez.es/RWizard) and as an R package on CRAN. The variables selected by VARSEDIG with the overlap method were very similar to those selected by logistic regression and discriminant analysis, but VARSEDIG overcomes some shortcomings of these methods. VARSEDIG is therefore a good alternative to current classical classification methods for identifying morphometric features that significantly discriminate a taxon and for validating its morphological distinctness from other taxa. As a demonstration of its potential for this purpose, we analyze morphological discrimination among some species of the Neotropical freshwater family Characidae.

  19. Compound Selectivity and Target Residence Time of Kinase Inhibitors Studied with Surface Plasmon Resonance.

    PubMed

    Willemsen-Seegers, Nicole; Uitdehaag, Joost C M; Prinsen, Martine B W; de Vetter, Judith R F; de Man, Jos; Sawa, Masaaki; Kawase, Yusuke; Buijsman, Rogier C; Zaman, Guido J R

    2017-02-17

    Target residence time (τ) has been suggested to be a better predictor of the biological activity of kinase inhibitors than inhibitory potency (IC50) in enzyme assays. Surface plasmon resonance binding assays for 46 human protein and lipid kinases were developed. The association and dissociation constants of 80 kinase inhibitor interactions were determined. τ and equilibrium affinity constants (KD) were calculated to determine kinetic selectivity. Comparison of τ and KD or IC50 values revealed a strikingly different view on the selectivity of several kinase inhibitors, including the multi-kinase inhibitor ponatinib, which was tested on 10 different kinases. In addition, known pan-Aurora inhibitors resided much longer on Aurora B than on Aurora A, despite having comparable affinity for Aurora A and B. Furthermore, the γ/δ-selective PI3K inhibitor duvelisib and the δ-selective drug idelalisib had similar 20-fold selectivity for the δ- over the γ-isoform, but duvelisib resided much longer on both targets.
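
    The kinetic quantities compared in this study follow directly from the measured rate constants: KD = koff/kon and residence time τ = 1/koff. A small sketch of that bookkeeping (the rate constants are invented for illustration):

    ```python
    def kinetic_selectivity(kon, koff):
        """Equilibrium affinity KD (M) and target residence time tau (s)
        from association (1/M/s) and dissociation (1/s) rate constants."""
        return koff / kon, 1.0 / koff

    # Two hypothetical interactions with identical affinity (KD = 1 nM)
    # but residence times differing by two orders of magnitude:
    kd_a, tau_a = kinetic_selectivity(kon=1e6, koff=1e-3)   # tau = 1000 s
    kd_b, tau_b = kinetic_selectivity(kon=1e4, koff=1e-5)   # tau = 100000 s
    print(kd_a, tau_a, kd_b, tau_b)
    ```

    This is exactly why τ-based and KD-based selectivity profiles can disagree: two targets bound with the same affinity may be occupied for very different lengths of time.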

  20. Inferring genetic interactions via a nonlinear model and an optimization algorithm.

    PubMed

    Chen, Chung-Ming; Lee, Chih; Chuang, Cheng-Long; Wang, Chia-Chang; Shieh, Grace S

    2010-02-26

    Biochemical pathways are gradually becoming recognized as central to complex human diseases, and recently genetic/transcriptional interactions have been shown to be able to predict partial pathways. With the abundant information made available by microarray gene expression data (MGED), nonlinear modeling of these interactions is now feasible. Two of the latest advances in nonlinear modeling used sigmoid models to depict the transcriptional interaction of a transcription factor (TF) with a target gene, but did not model cooperative or competitive interactions of several TFs for a target. An S-shape model and an optimization algorithm (GASA) were developed to infer genetic interactions/transcriptional regulation of several genes simultaneously using MGED. GASA consists of a genetic algorithm (GA) and a simulated annealing (SA) algorithm, enhanced by a steepest gradient descent algorithm to avoid being trapped in local minima. Using simulated data with various degrees of noise, we studied how GASA performed with two model selection criteria and two search spaces. Furthermore, GASA was shown to outperform network component analysis, the time series network inference algorithm (TSNI), GA with regular GA (GAGA) and GA with regular SA. Two applications are demonstrated. First, GASA is applied to infer a subnetwork of human T-cell apoptosis. Several of the predicted interactions are supported by the literature. Second, GASA was applied to infer the transcription factors of 34 cell-cycle-regulated targets in S. cerevisiae, and GASA performed better than one of the latest advances in nonlinear modeling, GAGA and TSNI. Moreover, GASA is able to predict multiple transcription factors for certain targets, and these results coincide with experimentally confirmed data in YEASTRACT. GASA is shown to infer both genetic interactions and transcriptional regulatory interactions well. In particular, GASA seems able to characterize the nonlinear mechanism of transcriptional regulation.
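
    The S-shape model mentioned above represents a target's expression as a saturating function of its regulators' expression, which is what lets several TFs act cooperatively or competitively on one target. A schematic sketch under that reading (the additive combination rule and all parameter values are illustrative, not the paper's exact model):

    ```python
    import numpy as np

    def sigmoid(x, k=2.0, x0=0.5):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    def target_expression(tf_levels, weights, k=2.0, x0=0.5):
        """S-shape regulation: weighted TF input (positive weights =
        activation, negative = repression) pushed through a sigmoid."""
        drive = float(np.dot(weights, tf_levels))
        return sigmoid(drive, k=k, x0=x0)

    # Two cooperating activators and one repressor acting on one target.
    print(target_expression(np.array([0.8, 0.6, 0.9]),
                            np.array([1.0, 0.7, -1.2])))
    ```

    In this framing, GASA's job is to search over the weights (and sigmoid parameters) so that predicted target expression matches the microarray time series.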

  1. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
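
    The M-out-of-N scan-to-scan integration rule at the end of the KBT chain is simple to state in code: declare a track whenever at least M of the last N scans produced an associated threshold crossing. A minimal sketch (the M and N values are illustrative, not those used in the demonstration):

    ```python
    from collections import deque

    def m_of_n_detector(hits, m=3, n=5):
        """Yield True whenever at least m of the last n scans contained an
        associated threshold crossing (hits is an iterable of booleans)."""
        window = deque(maxlen=n)
        for hit in hits:
            window.append(bool(hit))
            yield sum(window) >= m

    scan_hits = [0, 1, 1, 0, 1, 1, 1, 0, 0, 0]
    print(list(m_of_n_detector(scan_hits)))  # detection once 3 of 5 are hits
    ```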

  2. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was thus comparable to that of no-compensation delivery, while its geometric and dosimetric accuracy fell between those of real-time tracking and no compensation.
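
    A minimal sketch of the moving-average compensation idea: the MLC follows a smoothed version of the perpendicular target position rather than its instantaneous value, trading geometric fidelity for fewer beam holds. The window length here is illustrative:

    ```python
    import numpy as np

    def moving_average_positions(positions, window=20):
        """Smoothed target trajectory used to steer the MLC: each output
        sample is the mean of the last `window` position samples."""
        out = np.empty(len(positions), dtype=float)
        csum = np.cumsum(np.insert(positions.astype(float), 0, 0.0))
        for i in range(len(positions)):
            lo = max(0, i - window + 1)
            out[i] = (csum[i + 1] - csum[lo]) / (i + 1 - lo)
        return out

    t = np.linspace(0, 10, 500)
    target = 5 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * t  # motion with drift
    print(np.abs(target - moving_average_positions(target)).max())
    ```

    The residual printed at the end is the geometric price of smoothing: the moving average tracks the baseline drift well but lags the periodic component, consistent with the intermediate accuracy reported above.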

  3. Selective inhibitors of zinc-dependent histone deacetylases. Therapeutic targets relevant to cancer.

    PubMed

    Kollar, Jakub; Frecer, Vladimir

    2015-01-01

    Histone deacetylases (HDACs), which act on acetylated histones and/or other non-histone protein substrates, represent validated epigenetic targets for the treatment of cancer and other human diseases. The inhibition of HDAC activity was shown to induce cell cycle arrest, differentiation, apoptosis as well as a decrease in proliferation, angiogenesis, migration, and cell resistance to chemotherapy. Targeting single HDAC isoforms with selective inhibitors will help to reveal the role of individual HDACs in cancer development or uncover further biological consequences of protein acetylation. This review focuses on conventional zinc-containing HDACs. In its first part, the biological role of individual HDACs in various types of cancer is summarized. In the second part, promising HDAC inhibitors showing activity both in enzymatic and cell-based assays are surveyed with an emphasis on the inhibitors selective to the individual HDACs.

  4. Ventrogluteal versus dorsogluteal site selection: A cross-sectional study of muscle and subcutaneous fat thicknesses and an algorithm incorporating demographic and anthropometric data to predict injection outcome.

    PubMed

    Larkin, Theresa A; Ashcroft, Elfriede; Elgellaie, Asmahan; Hickey, Blake A

    2017-06-01

    The dorsogluteal and ventrogluteal intramuscular injection sites both have their use in clinical practice; however, it has not been established in whom one or the other should be preferentially targeted or avoided. There is a need for an evidence-based approach towards site selection for a successful intramuscular injection outcome and to avoid the unwanted injection outcomes of inadvertent subcutaneous injection or bone contact. Injection outcome is dependent on injection-site subcutaneous fat thickness and muscle thickness; these are likely influenced by gender and anthropometry. To determine whether subcutaneous fat, muscle, and total tissue thicknesses differ between the dorsogluteal and ventrogluteal sites, and whether theoretical injection outcome (intramuscular, subcutaneous, or bone contact) can be predicted by demographic and anthropometric data and described by an algorithm. Cross-sectional study design. University in Australia. 145 volunteers (57% female) of at least 18 years of age, recruited through the university community. Anthropometric data were collected, and subcutaneous fat and muscle thicknesses were quantified by ultrasonography. Anthropometric differences between theoretical injection outcome groups (bone contact versus intramuscular versus subcutaneous at the ventrogluteal and dorsogluteal sites) were determined for each gender (ANOVA). Multiple regression analysis was conducted to determine the influence of demographic and anthropometric data on theoretical intramuscular injection outcome. An algorithm to guide site selection was developed for each gender, based on the anthropometric measures that best discriminated between injection outcomes. Subcutaneous fat, muscle, and total tissue were significantly thicker at the dorsogluteal site than at the ventrogluteal site, and subcutaneous fat was significantly thicker in females than in males at both sites (all p < 0.001); there was no gender difference for muscle or total tissue thickness at either site.

  5. Large scale tracking algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Application of target costing in machining

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.

    2004-11-01

    In today's intensely competitive and highly volatile business environment, consistent development of low-cost, high-quality products that meet functionality requirements is key to a company's survival. Companies continuously strive to reduce costs while still producing quality products in order to stay ahead of the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determining the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. An extensive literature review revealed that companies in the automotive, electronic, and process industries have reaped the benefits of target costing. The target costing approach has not, however, been applied in the machining industry, although other techniques based on Geometric Programming, Goal Programming, and Lagrange Multipliers have been proposed for this industry. These models follow a forward approach: they first select a set of machining parameters and then determine the machining cost. In this study, we have therefore developed an algorithm to apply the concepts of target costing, a backward approach that selects the machining parameters based on the required machining cost and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for the turning operation and was successfully validated using practical data.
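
    The backward, target-costing selection described above can be sketched as: derive the allowable machining cost from price and desired margin, then keep only the parameter sets whose predicted cost fits under that ceiling. A schematic sketch (the toy cost model and candidate parameter values are invented for illustration, not the paper's turning model):

    ```python
    def allowable_cost(selling_price, desired_margin):
        """Target costing works backwards: cost ceiling = price - profit."""
        return selling_price - desired_margin

    def select_parameters(candidates, cost_model, ceiling):
        """Keep candidate machining-parameter sets whose predicted cost
        meets the target; return them ranked cheapest first."""
        feasible = [(cost_model(c), c) for c in candidates
                    if cost_model(c) <= ceiling]
        return sorted(feasible)

    # Hypothetical turning candidates: (cutting speed m/min, feed mm/rev).
    cands = [(150, 0.20), (200, 0.25), (250, 0.30)]
    model = lambda c: 12.0 - 0.02 * c[0] - 8.0 * c[1]  # toy cost model, $/part
    print(select_parameters(cands, model, allowable_cost(14.0, 6.5)))
    ```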

  7. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    NASA Astrophysics Data System (ADS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. Algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case-note review analysed arrhythmic events stored in patients' ICD memory; 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p < 0.001 compared with the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer, and 1 output had sensitivity 1.00 and specificity 0.71 (p < 0.001). A prospective study was performed using KE to list arrhythmias, factors, and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature, and motion data were collected from 65 patients during cardiac electrophysiological studies. 5 incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.
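
    The network described (feed-forward, back-propagation, 7 inputs, one 4-node hidden layer, 1 output) maps directly onto a standard multilayer perceptron. A sketch using scikit-learn, where the random data stands in for the arrhythmia parameters, which are not given in the abstract:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 7))        # 7 measured parameters per episode
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # stand-in rhythm labels

    # Feed-forward network: 7 inputs -> 4 hidden nodes -> 1 sigmoid output,
    # trained by back-propagation (gradient descent on log-loss).
    clf = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                        solver="sgd", learning_rate_init=0.1,
                        max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))
    ```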

  8. Improving threading algorithms for remote homology modeling by combining fragment and template comparisons

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2010-01-01

    In this work, we develop a method called FTCOM for assessing the global quality of protein structural models for targets of medium and hard difficulty (remote homology) produced by structure prediction approaches such as threading or ab initio structure prediction. FTCOM requires the Cα coordinates of full-length models and assesses model quality based on fragment comparison and a score derived from comparison of the model to top threading templates. On a set of 361 medium/hard targets, FTCOM was applied to the results of the SP3, SPARKS, PROSPECTOR_3, and PRO-SP3-TASSER threading algorithms and assessed for its ability to improve upon them. The average TM-score improves by 5%-10% for the first selected model by the new method over models obtained by the original selection procedure in the respective threading methods. Moreover, the number of foldable targets (TM-score ≥ 0.4) increases by at least 7.6% for SP3 and by as much as 54% for SPARKS. Thus, FTCOM is a promising approach to template selection. PMID:20455261

  9. Attention blinks for selection, not perception or memory: reading sentences and reporting targets.

    PubMed

    Potter, Mary C; Wyble, Brad; Olejarczyk, Jennifer

    2011-12-01

    In whole report, a sentence presented sequentially at a rate of about 10 words/s can be recalled accurately, whereas if the task is to report only two target words (e.g., red words), the second target suffers an attentional blink if it appears shortly after the first target. If these two tasks are carried out simultaneously, is there an attentional blink, and does it affect both tasks? Here, sentence report was combined with report of two target words (Experiments 1 and 2) or two inserted target digits, either Arabic numerals or digit words (Experiments 3 and 4). When participants reported only the targets, an attentional blink was always observed. When they reported both the sentence and the targets, sentence report was quite accurate, but there was an attentional blink in picking out the targets when they were part of the sentence. When the targets were extra digits inserted in the sentence, there was no blink when viewers also reported the sentence. These results challenge some theories of the attentional blink: blinks result from online selection, not perception or memory.

  10. Any Two Learning Algorithms Are (Almost) Exactly Identical

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2000-01-01

    This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical in such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, the difference between A's and B's generalization performance will exceed 1% for less than 2e-50 of all targets. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.

  11. Hyperspectral Imaging for Predicting the Internal Quality of Kiwifruits Based on Variable Selection Algorithms and Chemometric Models.

    PubMed

    Zhu, Hongyan; Chu, Bingquan; Fan, Yangyang; Tao, Xiaoya; Yin, Wenxin; He, Yong

    2017-08-10

    We investigated the feasibility and potential of determining firmness, soluble solids content (SSC), and pH in kiwifruits using hyperspectral imaging combined with variable selection methods and calibration models. The images were acquired by a push-broom hyperspectral reflectance imaging system covering two spectral ranges. Weighted regression coefficients (BW), the successive projections algorithm (SPA), and genetic algorithm-partial least squares (GAPLS) were compared and evaluated for the selection of effective wavelengths. Moreover, multiple linear regression (MLR), partial least squares regression, and least squares support vector machine (LS-SVM) models were developed to predict the quality attributes quantitatively from the effective wavelengths. The established models, particularly SPA-MLR, SPA-LS-SVM, and GAPLS-LS-SVM, performed well. The SPA-MLR models for firmness (Rpre = 0.9812, RPD = 5.17) and SSC (Rpre = 0.9523, RPD = 3.26) at 380-1023 nm showed excellent performance, whereas GAPLS-LS-SVM was the optimal model at 874-1734 nm for predicting pH (Rpre = 0.9070, RPD = 2.60). Image processing algorithms were developed to apply the predictive model to every pixel and generate prediction maps that visualize the spatial distribution of firmness and SSC. Hence, the results clearly demonstrate that hyperspectral imaging has potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.
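
    The prediction-map step lends itself to a short illustration. The following is a minimal sketch, not the authors' pipeline: it fits an MLR on a handful of effective wavelengths and applies it to every pixel of a hyperspectral cube; the cube, calibration data, and band indices are all random placeholders.

        # Minimal sketch: MLR on selected bands, applied pixel-by-pixel to
        # produce a prediction map. All data and band indices are placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        n_bands, h, w = 120, 64, 64
        cube = rng.random((h, w, n_bands))      # reflectance cube (placeholder)
        effective = [12, 37, 58, 90]            # e.g. bands picked by SPA

        # Calibration set: spectra at the effective bands + reference firmness.
        X_cal = rng.random((40, len(effective)))
        y_cal = 2.0 + X_cal @ np.array([1.5, -0.8, 0.6, 2.1]) \
                + rng.normal(0, 0.05, 40)

        # MLR fit via least squares with an intercept column.
        A = np.hstack([np.ones((len(X_cal), 1)), X_cal])
        coef, *_ = np.linalg.lstsq(A, y_cal, rcond=None)

        # Apply the model to every pixel to obtain the prediction map.
        pixels = cube[:, :, effective].reshape(-1, len(effective))
        pred_map = (np.hstack([np.ones((pixels.shape[0], 1)), pixels])
                    @ coef).reshape(h, w)
        print("firmness map range:", pred_map.min(), pred_map.max())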

  12. Selective whole genome amplification for resequencing target microbial species from complex natural samples.

    PubMed

    Leichty, Aaron R; Brisson, Dustin

    2014-10-01

    Population genomic analyses have demonstrated power to address major questions in evolutionary and molecular microbiology. Collecting populations of genomes is hindered in many microbial species by the absence of a cost-effective and practical method to collect ample quantities of sufficiently pure genomic DNA for next-generation sequencing. Here we present a simple method to amplify genomes of a target microbial species present in a complex, natural sample. The selective whole genome amplification (SWGA) technique amplifies target genomes using nucleotide sequence motifs that are common in the target microbe genome but rare in the background genomes to prime the highly processive phi29 polymerase. SWGA thus selectively amplifies the target genome from samples in which it originally represented a minor fraction of the total DNA. The post-SWGA samples are enriched in target genomic DNA, which is ideal for population resequencing. We demonstrate the efficacy of SWGA using both laboratory-prepared mixtures of cultured microbes and a natural host-microbe association. Targeted amplification of Borrelia burgdorferi mixed with Escherichia coli at genome ratios of 1:2000 resulted in >10^5-fold amplification of the target genomes with <6.7-fold amplification of the background. SWGA-treated genomic extracts from Wolbachia pipientis-infected Drosophila melanogaster resulted in up to 70% of high-throughput resequencing reads mapping to the W. pipientis genome. By contrast, only 2-9% of sequencing reads were derived from W. pipientis without prior amplification. The SWGA technique results in high sequencing coverage at a fraction of the sequencing effort, thus allowing population genomic studies at affordable costs. Copyright © 2014 by the Genetics Society of America.
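
    The core computational idea, ranking motifs by their frequency in the target genome relative to the background, is easy to illustrate. The following is a minimal sketch under stated assumptions: the sequences are random placeholders rather than the B. burgdorferi and E. coli genomes, and the motif length, pseudocount, and function names are illustrative.

        # Minimal sketch of the SWGA primer-selection idea: rank k-mers by
        # target/background frequency ratio. Sequences are random placeholders.

        import random
        from collections import Counter

        def kmer_counts(seq, k):
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def rank_motifs(target, background, k=8, pseudo=1.0):
            """Return motifs sorted by target/background frequency ratio."""
            t, b = kmer_counts(target, k), kmer_counts(background, k)
            t_total, b_total = sum(t.values()), sum(b.values())
            def ratio(m):
                return ((t[m] + pseudo) / t_total) / ((b[m] + pseudo) / b_total)
            return sorted(t, key=ratio, reverse=True)

        random.seed(0)
        target = "".join(random.choices("ACGT", k=50_000))
        background = "".join(random.choices("ACGT", weights=[3, 2, 2, 3],
                                            k=50_000))
        print("top candidate motifs:", rank_motifs(target, background)[:5])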

  13. Capability of the Maximax&Maximin selection operator in the evolutionary algorithm for a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Ramli, Razamin; Tein, Lim Huai

    2016-08-01

    A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels for managing nurse personnel. Constructing the best possible nurse work schedule is therefore a worthwhile effort. To this end, an improved selection operator within an Evolutionary Algorithm (EA) strategy for the nurse scheduling problem (NSP) is proposed, together with smart and efficient scheduling procedures. The performance of each potential solution, or schedule, was computed through fitness evaluation. The best-so-far solution, fulfilling all constraints considered in the NSP, was obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA.
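
    The abstract does not spell out the MM operator, so the following is only one plausible interpretation, offered as a hedged sketch: pair the individual with the best overall fitness (maximax) with the individual whose worst per-constraint score is best (maximin). The toy schedules, fitness, and constraint functions are all illustrative assumptions.

        # One plausible reading (an assumption, not the paper's definition) of
        # a Maximax&Maximin parent selection for a constrained EA.

        import random

        def maximax_maximin_parents(population, fitness, constraint_scores):
            """fitness(s): scalar quality; constraint_scores(s): per-constraint
            satisfaction scores for schedule s."""
            maximax = max(population, key=fitness)
            maximin = max(population, key=lambda s: min(constraint_scores(s)))
            return maximax, maximin

        # Toy demo: schedules are bit strings, constraints are three slices.
        random.seed(2)
        pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
        fit = lambda s: sum(s)
        cons = lambda s: (sum(s[0:4]), sum(s[4:8]), sum(s[8:12]))
        p1, p2 = maximax_maximin_parents(pop, fit, cons)
        print("parents:", p1, p2)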

  14. Bio-inspired algorithms applied to molecular docking simulations.

    PubMed

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  15. Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Tujin; Su, Dian; Liu, Tao

    2012-04-01

    Selected reaction monitoring (SRM)—also known as multiple reaction monitoring (MRM)—has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity for, e.g., detecting low-abundance biomarkers likely present at the pg/mL to low ng/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides or their posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at the low- to sub-ng/mL level in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for detection of pg/mL level proteins in plasma are also discussed.

  16. Using a genetic algorithm as an optimal band selector in the mid and thermal infrared (2.5-14 μm) to discriminate vegetation species.

    PubMed

    Ullah, Saleem; Groen, Thomas A; Schlerf, Martin; Skidmore, Andrew K; Nieuwenhuis, Willem; Vaiphasa, Chaichoke

    2012-01-01

    Genetic variation between plant species determines differences in their physio-chemical makeup and, ultimately, in their hyperspectral emissivity signatures. The hyperspectral emissivity signatures, on the one hand, capture the subtle physio-chemical differences between vegetation types, but on the other hand, highlight the problem of high dimensionality. The aim of this paper is to investigate the performance of genetic algorithms coupled with the spectral angle mapper (SAM) in identifying a meaningful subset of wavebands sensitive enough to discriminate thirteen broadleaved vegetation species from laboratory-measured hyperspectral emissivities. The performance was evaluated using overall classification accuracy and the Jeffries-Matusita (J-M) distance. For the multiple plant species, the bands targeted by the genetic algorithm resulted in a high overall classification accuracy (90%). In the pairwise comparisons, the wavebands selected by the genetic algorithm yielded higher J-M distances than randomly selected wavebands did. This study concludes that targeted wavebands from leaf emissivity spectra are able to discriminate vegetation species.
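
    The GA-plus-SAM coupling can be sketched compactly. The following is a minimal illustration under stated assumptions, not the authors' implementation: each chromosome is a band subset scored by how well SAM (smallest spectral angle to the class mean) classifies labeled spectra using only those bands; the spectra, GA parameters, and crossover/repair scheme are placeholders.

        # Minimal sketch of GA band selection with a spectral-angle-mapper
        # fitness. Spectra are random placeholders for measured emissivities.

        import numpy as np

        rng = np.random.default_rng(3)
        n_bands, n_select = 200, 10
        labels = np.repeat(np.arange(13), 10)          # 13 species, 10 each
        X = rng.random((len(labels), n_bands))         # emissivity spectra
        refs = np.stack([X[labels == c].mean(0) for c in range(13)])

        def sam_accuracy(bands):
            """Classify each spectrum by largest cosine (smallest angle)."""
            a, r = X[:, bands], refs[:, bands]
            cosang = (a @ r.T) / (np.linalg.norm(a, axis=1, keepdims=True)
                                  * np.linalg.norm(r, axis=1))
            return (np.argmax(cosang, axis=1) == labels).mean()

        # Tiny GA: elitism, one-point crossover on band lists, repair/mutate.
        pop = [rng.choice(n_bands, n_select, replace=False) for _ in range(30)]
        for gen in range(40):
            scored = sorted(pop, key=sam_accuracy, reverse=True)
            pop = scored[:10]                          # elite retention
            while len(pop) < 30:
                p1, p2 = rng.choice(10, 2, replace=False)
                child = np.unique(np.concatenate([scored[p1][:5],
                                                  scored[p2][5:]]))
                while len(child) < n_select:           # repair duplicates
                    child = np.unique(np.append(child, rng.integers(n_bands)))
                pop.append(child)
        print("best subset accuracy:", sam_accuracy(max(pop, key=sam_accuracy)))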

  17. Human long intrinsically disordered protein regions are frequent targets of positive selection.

    PubMed

    Afanasyeva, Arina; Bockwoldt, Mathias; Cooney, Christopher R; Heiland, Ines; Gossmann, Toni I

    2018-06-01

    Intrinsically disordered regions occur frequently in proteins and are characterized by a lack of a well-defined three-dimensional structure. Although these regions do not show a higher order of structural organization, they are known to be functionally important. Disordered regions are rapidly evolving, largely attributed to relaxed purifying selection and an increased role of genetic drift. It has also been suggested that positive selection might contribute to their rapid diversification. However, for our own species, it is currently unknown whether positive selection has played a role during the evolution of these protein regions. Here, we address this question by investigating the evolutionary pattern of more than 6600 human proteins with intrinsically disordered regions and their ordered counterparts. Our comparative approach with data from more than 90 mammalian genomes uses a priori knowledge of disordered protein regions, and we show that this increases the power to detect positive selection by an order of magnitude. We can confirm that human intrinsically disordered regions evolve more rapidly, not only within humans but also across the entire mammalian phylogeny. They have, however, experienced substantial evolutionary constraint, hinting at their fundamental functional importance. We find compelling evidence that disordered protein regions are frequent targets of positive selection and estimate that the relative rate of adaptive substitutions differs fourfold between disordered and ordered protein regions in humans. Our results suggest that disordered protein regions are important targets of genetic innovation and that the contribution of positive selection in these regions is more pronounced than in other protein parts. © 2018 Afanasyeva et al.; Published by Cold Spring Harbor Laboratory Press.

  18. A Spectrally Selective Attenuation Mechanism-Based Kpar Algorithm for Biomass Heating Effect Simulation in the Open Ocean

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng

    2017-12-01

    Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm comprises two parts: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that accounts for the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) describing the spectrally selective attenuation of underwater solar radiation in the open oceans. The algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it produces acceptable PAR profile estimations while clearly removing the impact of satellite residual errors on Kpar estimation. Furthermore, the performance of the TDKA algorithm is evaluated through its applicability to Zeu derivation and to simulation of the mean temperature within the mixed layer depth (TML); the results show that it significantly decreases the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied to simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data, it is found that biomass heating effects can lead to a maximum positive temperature difference of 3.4°C in the upper layers but a maximum negative difference of 0.67°C in the deep layers.
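
    The TDKA parameterization itself is given in the paper; as a hedged illustration of the downstream Zeu step only, the sketch below assumes an arbitrary placeholder Kpar(z) profile and uses the standard relation PAR(z) = PAR(0)·exp(-∫Kpar dz), taking Zeu as the depth where PAR falls to 1% of its surface value.

        # Minimal sketch of the Zeu step: propagate surface PAR downward
        # through a depth-dependent Kpar profile and find the 1% light depth.
        # The Kpar profile is a placeholder, not the TDKA parameterization.

        import numpy as np

        z = np.linspace(0.0, 200.0, 401)               # depth grid, m
        kpar = 0.04 + 0.03 * np.exp(-z / 30.0)         # placeholder Kpar(z), 1/m

        # Cumulative optical depth via trapezoidal integration.
        tau = np.concatenate([[0.0],
                              np.cumsum(0.5 * (kpar[1:] + kpar[:-1])
                                        * np.diff(z))])
        par = np.exp(-tau)                             # PAR(z) / PAR(0)

        zeu = z[np.argmax(par <= 0.01)]                # first depth below 1%
        print(f"euphotic depth Zeu = {zeu:.1f} m")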

  19. Solving TSP problem with improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying

    2018-05-01

    The TSP is a classical NP-hard problem. The vehicle routing problem (VRP) and city pipeline layout optimization can both be cast as TSP instances, so solving the TSP efficiently is important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. We improve the selection operator of the genetic algorithm and introduce an elite-retention strategy to ensure the quality of the selection operation. In the mutation operation, adaptive selection of the mutation scheme improves the quality of the search and of the variation. After each chromosome evolves, a one-way evolutionary reversal operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
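
    The three ingredients named above (elite retention, adaptive mutation, and a one-way reversal applied to offspring) fit in a short sketch. The following is a minimal illustration under stated assumptions, not the authors' exact algorithm: city coordinates, the adaptive-rate schedule, and all parameters are placeholders, and the reversal keeps a random segment inversion only when it shortens the tour.

        # Minimal sketch of an improved GA for TSP: elite retention, an
        # adaptive mutation rate, and a keep-if-better reversal step.

        import random, math

        random.seed(4)
        cities = [(random.random(), random.random()) for _ in range(25)]

        def tour_length(tour):
            return sum(math.dist(cities[tour[i]],
                                 cities[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def ordered_crossover(p1, p2):
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = p1[a:b]
            rest = [c for c in p2 if c not in child]
            return rest[:a] + child + rest[a:]

        def reverse_if_better(tour):
            """One-way reversal: keep a segment inversion only if it helps."""
            a, b = sorted(random.sample(range(len(tour)), 2))
            cand = tour[:a] + tour[a:b][::-1] + tour[b:]
            return cand if tour_length(cand) < tour_length(tour) else tour

        pop = [random.sample(range(25), 25) for _ in range(60)]
        for gen in range(300):
            pop.sort(key=tour_length)
            elite = pop[:6]                      # elite retention
            rate = 0.02 + 0.2 * gen / 300        # one simple adaptive schedule
            children = []
            while len(children) < 54:
                c = ordered_crossover(*random.sample(elite + pop[6:20], 2))
                if random.random() < rate:       # swap mutation
                    i, j = random.sample(range(25), 2)
                    c[i], c[j] = c[j], c[i]
                children.append(reverse_if_better(c))
            pop = elite + children
        print("best tour length:", tour_length(min(pop, key=tour_length)))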

  20. A 20-Amino Acid Module of Protein Kinase Cϵ Involved in Translocation and Selective Targeting at Cell-Cell Contacts*

    PubMed Central

    Diouf, Barthélémy; Collazos, Alejandra; Labesse, Gilles; Macari, Françoise; Choquet, Armelle; Clair, Philippe; Gauthier-Rouvière, Cécile; Guérineau, Nathalie C.; Jay, Philippe; Hollande, Frédéric; Joubert, Dominique

    2009-01-01

    In the pituitary gland, activated protein kinase C (PKC) isoforms accumulate either selectively at the cell-cell contact (α and ϵ) or at the entire plasma membrane (β1 and δ). The molecular mechanisms underlying these various subcellular locations are not known. Here, we demonstrate the existence within PKCϵ of a cell-cell contact targeting sequence (3CTS) that, upon stimulation, is capable of targeting PKCδ, chimerin-α1, and the PKCϵ C1 domain to the cell-cell contact. We show that this selective targeting of PKCϵ is lost upon overexpression of 3CTS fused to a (R-Ahx-R)4 (where Ahx is 6-aminohexanoic acid) vectorization peptide, reflecting a dominant-negative effect of the overexpressed 3CTS on targeting selectivity. 3CTS contains a putative amphipathic α-helix, a 14-3-3-binding site, and the Glu-374 amino acid, involved in targeting selectivity. We show that the integrity of the α-helix is important for translocation but that 14-3-3 is not involved in targeting selectivity. However, PKCϵ translocation is increased when PKCϵ/14-3-3 interaction is abolished, suggesting that phorbol 12-myristate 13-acetate activation may initiate two sets of PKCϵ functions, those depending on 14-3-3 and those depending on translocation to cell-cell contacts. Thus, 3CTS is involved in the modulation of translocation via its 14-3-3-binding site, in cytoplasmic desequestration via the α-helix, and in selective PKCϵ targeting at the cell-cell contact via Glu-374. PMID:19429675