Science.gov

Sample records for adaptive algorithm based

  1. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has limitations in convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. The approach runs the DNA computing algorithm with adaptive parameters driven toward the desired goal by quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility with respect to the data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy compared with the standard DNA computing algorithm.
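
    The abstract does not give the update equations, but the standard QPSO position update it builds on can be sketched as follows; the coefficient value and array names are illustrative, and the paper's DNA-parameter encoding is not reproduced.

    ```python
    import numpy as np

    def qpso_step(x, pbest, gbest, beta=0.75, rng=None):
        """One textbook QPSO position update (Sun et al. form) -- a sketch.

        x     : (n_particles, dim) current positions
        pbest : (n_particles, dim) personal-best positions
        gbest : (dim,) global-best position
        beta  : contraction-expansion coefficient (illustrative value)
        """
        rng = rng or np.random.default_rng()
        n, dim = x.shape
        mbest = pbest.mean(axis=0)                 # mean of all personal bests
        phi = rng.random((n, dim))
        p = phi * pbest + (1.0 - phi) * gbest      # per-particle local attractor
        u = rng.random((n, dim))
        sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
        return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
    ```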

  2. Adaptive RED algorithm based on minority game

    NASA Astrophysics Data System (ADS)

    Wei, Jiaolong; Lei, Ling; Qian, Jingjing

    2007-11-01

    As more applications appear and Internet technology develops, relying on the end systems alone cannot satisfy the complex QoS demands of the network; router mechanisms must take part in protecting responsive flows from non-responsive ones. Routers mainly use active queue management (AQM) to avoid congestion. Considering the interaction among routers, this paper applies the minority game to describe the interaction of users and observes its effect on the average queue length. Because the parameters α and β of ARED are hard to determine, adaptive RED based on the minority game models the interactions of the agents and tunes the ARED parameters α and β toward their best values. Adaptive RED based on the minority game thus optimizes ARED and smooths the average queue length. In addition, this paper extends the network simulator platform NS by adding new elements. Simulations have been implemented, and the results show that the new algorithm reaches the anticipated objectives.

  3. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  4. Adaptive NUC algorithm for uncooled IRFPA based on neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Ziji; Jiang, Yadong; Lv, Jian; Zhu, Hongbin

    2010-10-01

    With developments in uncooled infrared focal plane array (IRFPA) technology, many new advanced uncooled infrared sensors are being used in defense, scientific research, industrial, and commercial applications. A major difference between an IRFPA imaging system and a visible CCD camera is that an IRFPA needs non-uniformity correction (NUC) and dead-pixel compensation, usually called infrared image pre-processing. Calibration-based two-point or multi-point correction algorithms can correct the non-uniformity of IRFPAs, but they are limited by the assumption of pixel linearity and by pixel instability. Therefore, adaptive non-uniformity correction techniques have been developed. Two such adaptive algorithms are most often discussed: one based on a temporal high-pass filter and one based on a neural network. In this paper, a new NUC algorithm based on an improved neural network is introduced, and the improved network is compared with other adaptive correction techniques from several angles, including correction quality, computational efficiency, and hardware implementation. The results and discussion indicate that the adaptive algorithm offers improved performance compared with traditional calibration-based techniques: it not only provides better sensitivity but also increases the system dynamic range. As sensor applications expand, it will be very useful in future infrared imaging systems.
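
    For orientation, the classical neural-network NUC scheme this line of work builds on (per-pixel gain and offset pulled toward a local spatial average by an LMS rule) can be sketched roughly as below; the learning rate and window size are illustrative assumptions, and the paper's improved network is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nn_nuc_update(frame, gain, offset, lr=1e-3):
        """One frame of a Scribner-style neural-network NUC update (sketch).

        Each pixel's corrected value is pulled toward the local spatial mean
        of the corrected image by an LMS rule on its gain/offset.
        """
        corrected = gain * frame + offset
        desired = uniform_filter(corrected, size=3)   # local spatial average
        err = corrected - desired                     # per-pixel LMS error
        gain -= lr * err * frame
        offset -= lr * err
        return corrected, gain, offset
    ```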

  5. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are fixed; the latter are caused by temperature drift and their positions change over time. Traditional radiometric-calibration-based bad pixel detection and compensation algorithms are only valid for the fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and are replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used on complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse-coupled neural networks (PCNN) is proposed. Potential bad pixels are first detected by the PCNN, then image sequences are used periodically to confirm the real bad pixels and exclude false ones, and finally the bad pixels are replaced by the filtered result. Experiments with real infrared images obtained from a camera show the effectiveness of the proposed algorithm.
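
    The adaptive-median-filter baseline that the PCNN method improves on amounts to detecting outliers against a local median and replacing them; a minimal sketch follows (threshold and window size are illustrative, and the PCNN detector itself is not shown).

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def median_bad_pixel_correct(frame, k=3, thresh=4.0):
        """Detect and replace outlier pixels against a local median (AMF-style baseline)."""
        med = median_filter(frame, size=k)
        resid = frame - med
        sigma = 1.4826 * np.median(np.abs(resid))     # robust noise estimate (MAD)
        bad = np.abs(resid) > thresh * max(sigma, 1e-12)
        out = frame.copy()
        out[bad] = med[bad]                           # replace flagged pixels
        return out, bad
    ```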

  6. An adaptive gyroscope-based algorithm for temporal gait analysis.

    PubMed

    Greene, Barry R; McGrath, Denise; O'Neill, Ross; O'Donovan, Karol J; Burns, Adrian; Caulfield, Brian

    2010-12-01

    Body-worn kinematic sensors have been widely proposed as the optimal solution for portable, low cost, ambulatory monitoring of gait. This study aims to evaluate an adaptive gyroscope-based algorithm for automated temporal gait analysis using body-worn wireless gyroscopes. Gyroscope data from nine healthy adult subjects performing four walks at four different speeds were then compared against data acquired simultaneously using two force plates and an optical motion capture system. Data from a poliomyelitis patient, exhibiting pathological gait walking with and without the aid of a crutch, were also compared to the force plate. Results show that the mean true error between the adaptive gyroscope algorithm and force plate was -4.5 ± 14.4 ms and 43.4 ± 6.0 ms for IC and TC points, respectively, in healthy subjects. Similarly, the mean true error when data from the polio patient were compared against the force plate was -75.61 ± 27.53 ms and 99.20 ± 46.00 ms for IC and TC points, respectively. A comparison of the present algorithm against temporal gait parameters derived from an optical motion analysis system showed good agreement for nine healthy subjects at four speeds. These results show that the algorithm reported here could constitute the basis of a robust, portable, low-cost system for ambulatory monitoring of gait.

  7. A Competency-Based Guided-Learning Algorithm Applied on Adaptively Guiding E-Learning

    ERIC Educational Resources Information Center

    Hsu, Wei-Chih; Li, Cheng-Hsiu

    2015-01-01

    This paper presents a new algorithm called competency-based guided-learning algorithm (CBGLA), which can be applied on adaptively guiding e-learning. Computational process analysis and mathematical derivation of competency-based learning (CBL) were used to develop the CBGLA. The proposed algorithm could generate an effective adaptively guiding…

  8. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.

  9. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, often produces a mesh that is too coarse in some areas and over-refined in others; because it generates 27 new hexes in place of a single hex, there is little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which each hex to be refined is divided into eight new hexes. This allows much greater control over mesh density than a 3-refinement procedure and yields a mesh that is efficient for analysis, providing high element density in specific locations and reduced density elsewhere. In addition, the tool can be used effectively for inside-out hexahedral grid-based schemes that use Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and keep transitions from lower to higher mesh densities smooth, and templates are introduced to allow both convex and concave refinement.
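
    The basic 2-refinement split of a single hexahedron into eight children can be illustrated as below; the transition templates and two-layer transition zone described above are not shown, and the (i, j, k) corner indexing is an assumption of this sketch.

    ```python
    import numpy as np

    def split_hex_2refine(corners):
        """2-refinement of one hexahedron into 8 child hexes (sketch).

        corners : (2, 2, 2, 3) array of corner coordinates indexed by (i, j, k).
        Interior points are placed by trilinear interpolation, which is exact
        for a trilinear (possibly non-axis-aligned) hex.
        """
        t = np.array([0.0, 0.5, 1.0])
        lattice = np.zeros((3, 3, 3, 3))
        for a, u in enumerate(t):
            for b, v in enumerate(t):
                for c, w in enumerate(t):
                    # trilinear blend of the 8 corners at parameter (u, v, w)
                    lattice[a, b, c] = sum(corners[i, j, k]
                                           * (u if i else 1 - u)
                                           * (v if j else 1 - v)
                                           * (w if k else 1 - w)
                                           for i in (0, 1) for j in (0, 1) for k in (0, 1))
        return [lattice[a:a + 2, b:b + 2, c:c + 2].copy()
                for a in range(2) for b in range(2) for c in range(2)]
    ```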

  10. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are built on linear wavelet transforms, and far fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the morphological Haar wavelet transform. In the algorithm, the original image and the watermark image are each decomposed with the multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, taking into account the properties of the human visual system (HVS). Experimental results show that our method is more robust and effective than algorithms based on the ordinary wavelet transform.

  11. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed l(1) norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting.
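
    As a rough illustration of transform-domain l1 inpainting (not the paper's weighted, smoothed-l1, nondecimated formulation), an ISTA loop with an orthonormal 1-D DCT might look like this; the step size and regularization weight are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    def soft(z, t):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista_inpaint(y, mask, lam=0.05, step=1.0, iters=200):
        """ISTA for  min_c 0.5*||mask*(IDCT c) - y||^2 + lam*||c||_1  (1-D sketch).

        y    : observed signal with missing samples set to 0
        mask : 1 where a sample is observed, 0 where it is missing
        """
        c = dct(y, norm='ortho')                              # initial coefficients
        for _ in range(iters):
            x = idct(c, norm='ortho')
            grad = dct(mask * (mask * x - y), norm='ortho')   # chain rule through the IDCT
            c = soft(c - step * grad, step * lam)
        return idct(c, norm='ortho')
    ```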

  12. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot obtain the optimal threshold in different scenes. On this foundation, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. Experiments on test images indicate that the improved algorithm obtains a reasonable threshold and achieves better accuracy and precision in edge detection.
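
    One common heuristic for scene-adaptive Canny thresholds, given here only as a hedged illustration of the idea rather than the statistic used in the paper, derives both thresholds from the median intensity of the image.

    ```python
    import cv2
    import numpy as np

    def auto_canny(gray, sigma=0.33):
        """Median-based automatic Canny thresholds (common heuristic, illustrative).

        gray : 8-bit grayscale image
        """
        v = float(np.median(gray))
        lo = int(max(0, (1.0 - sigma) * v))
        hi = int(min(255, (1.0 + sigma) * v))
        return cv2.Canny(gray, lo, hi)
    ```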

  13. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos

    2016-07-01

    An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry this out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection, and the SBX operator minimizes this objective function via chromosome recombination. The adaptive procedure determines the search space from the line position to obtain convergence to the minimum; the chromosomes of vision parameters thus provide the minimization. The aim of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references or physical measurements. This improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids the errors produced by missing references. Additionally, three-dimensional vision is carried out based on the laser line position and the vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of the binocular calibration, performed against traditional genetic algorithms.
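
    The SBX operator mentioned above has a standard closed form (Deb and Agrawal); a minimal sketch, with the distribution index as an illustrative parameter:

    ```python
    import numpy as np

    def sbx_crossover(p1, p2, eta=2.0, rng=None):
        """Simulated binary crossover (SBX) on two real-coded parent vectors."""
        rng = rng or np.random.default_rng()
        u = rng.random(p1.shape)
        beta = np.where(u <= 0.5,
                        (2.0 * u) ** (1.0 / (eta + 1.0)),
                        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2
    ```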

  14. Mean-shift tracking algorithm based on adaptive fusion of multi-feature

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Xiao, Yanghui; Wang, Ende; Feng, Junhui

    2015-10-01

    The classic mean-shift tracking algorithm has achieved success in computer vision because of its speed and efficiency. However, it can fail to track in complicated conditions, for example when parts of the target are occluded, when there is little color difference between the target and the background, or when the illumination changes suddenly. To solve these problems, an improved algorithm is proposed based on mean-shift tracking and adaptive fusion of features. Color, edge, and corner features are used to describe the target in the feature space, and a method for measuring the discriminative power of each feature is presented to make the feature selection adaptive. The improved mean-shift tracking algorithm is then introduced based on the fusion of these features. To address the vulnerability of single-color-feature mean-shift tracking to sudden illumination changes, we combine an affine illumination model with the color feature space, which ensures correct and stable tracking under such conditions. Tests on a group of videos show that the tracking correctness and stability of the proposed algorithm are better than those of mean-shift tracking with a single feature space, and that the proposed algorithm is more robust than the classic algorithm under occlusion, target-background similarity, and illumination change.
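
    One simple way to realize the "measure the discrimination of each feature" step, offered only as a hedged sketch rather than the paper's exact measure, is to score each feature by how poorly its target histogram matches the background histogram and normalize the scores into fusion weights.

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient between two normalized histograms."""
        return np.sum(np.sqrt(p * q))

    def adaptive_feature_weights(target_hists, background_hists):
        """Weight each feature by how well it separates target from background.

        target_hists, background_hists : lists of normalized histograms, one per
        feature (e.g., color, edge orientation, corner response).
        """
        scores = np.array([1.0 - bhattacharyya(t, b)        # high = discriminative
                           for t, b in zip(target_hists, background_hists)])
        scores = np.maximum(scores, 1e-6)
        return scores / scores.sum()                        # fusion weights sum to 1
    ```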

  15. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations in grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. Adapted grid on the fracture geometry gave identical results with that of a fine grid. The adapted grid employed 88.2% less CPU time when compared to the solutions obtained by the fine grid.

  16. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    PubMed Central

    Prado-Velasco, Manuel; Ortiz Marín, Rafael; del Rio Cidoncha, Gloria

    2013-01-01

    Boosted by health consequences and the cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers—unobtrusiveness and reliability—defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity, in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that addresses the adaptive capability, and this is also presented. The work demonstrates the robustness and reliability of our new algorithm, which will be the basis of a smart falling monitor. This is shown in this work to underline the relevance of the results. PMID:24157505

  17. Multiobjective Image Color Quantization Algorithm Based on Self-Adaptive Hybrid Differential Evolution

    PubMed Central

    Xia, Xuewen

    2016-01-01

    In recent years, some researchers considered image color quantization as a single-objective problem and applied heuristic algorithms to solve it. This paper establishes a multiobjective image color quantization model with intracluster distance and intercluster separation as its objectives. Inspired by a multipopulation idea, a multiobjective image color quantization algorithm based on self-adaptive hybrid differential evolution (MoDE-CIQ) is then proposed to solve this model. Two numerical experiments on four common test images are conducted to analyze the effectiveness and competitiveness of the multiobjective model and the proposed algorithm. PMID:27738423
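
    The two objectives of the model (intracluster distance and intercluster separation) can be evaluated for a candidate palette as follows; this sketch only computes the objectives and does not reproduce the MoDE-CIQ search itself.

    ```python
    import numpy as np

    def quantization_objectives(pixels, palette):
        """Evaluate a candidate palette on the two model objectives:
        intra-cluster distance (minimize) and inter-cluster separation (maximize).

        pixels  : (N, 3) RGB pixels
        palette : (K, 3) candidate palette colors
        """
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)  # (N, K)
        assign = d.argmin(axis=1)
        intra = d[np.arange(len(pixels)), assign].mean()      # mean distance to assigned color
        pd = np.linalg.norm(palette[:, None, :] - palette[None, :, :], axis=2)
        inter = pd[np.triu_indices(len(palette), k=1)].min()  # smallest gap between palette colors
        return intra, inter
    ```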

  18. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
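
    A hedged sketch of the general idea, not the paper's exact operator: derive the mutation rate from the change in best fitness between successive generations, raising it when progress stalls so the search can escape local optima. All constants below are illustrative.

    ```python
    def adaptive_mutation_rate(best_fitness_history, base=0.01, gain=0.5,
                               low=0.001, high=0.25):
        """Mutation rate from the (pseudo)derivative of best fitness over generations.

        A small improvement between successive generations (a flat fitness curve)
        pushes the rate up toward `high`; rapid improvement pulls it toward `low`.
        """
        if len(best_fitness_history) < 2:
            return base
        delta = abs(best_fitness_history[-1] - best_fitness_history[-2])
        scale = abs(best_fitness_history[-1]) + 1e-12
        rate = base + gain * max(0.0, 1.0 - delta / (0.01 * scale))
        return min(high, max(low, rate))
    ```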

  19. A source reconstruction algorithm based on adaptive hp-FEM for bioluminescence tomography.

    PubMed

    Han, Runqiang; Liang, Jimin; Qu, Xiaochao; Hou, Yanbin; Ren, Nunu; Mao, Jingjing; Tian, Jie

    2009-08-17

    As a novel modality of molecular imaging, bioluminescence tomography (BLT) is used to observe and measure biological processes in vivo at the cellular and molecular levels in small animals. The core issue of BLT is to determine the distribution of internal bioluminescent sources from optical measurements on the external surface. In this paper, a new algorithm is presented for BLT source reconstruction based on an adaptive hp-finite element method. Using an adaptive mesh refinement strategy and an intelligently chosen permissible source region, we obtain more accurate information about the location and density of the sources, with improved robustness, stability and efficiency. Numerical simulations and a physical experiment were both conducted to verify the performance of the proposed algorithm, where the optical data on the phantom surface were obtained via Monte Carlo simulation and CCD camera detection, respectively. The results demonstrate the merits and potential of our algorithm for BLT source reconstruction.

  20. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  1. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  2. An adaptive metamodel-based global optimization algorithm for black-box type problems

    NASA Astrophysics Data System (ADS)

    Jie, Haoxiang; Wu, Yizhong; Ding, Jianwan

    2015-11-01

    In this article, an adaptive metamodel-based global optimization (AMGO) algorithm is presented to solve unconstrained black-box problems. In the AMGO algorithm, a type of hybrid model composed of kriging and augmented radial basis function (RBF) is used as the surrogate model. The weight factors of hybrid model are adaptively selected in the optimization process. To balance the local and global search, a sub-optimization problem is constructed during each iteration to determine the new iterative points. As numerical experiments, six standard two-dimensional test functions are selected to show the distributions of iterative points. The AMGO algorithm is also tested on seven well-known benchmark optimization problems and contrasted with three representative metamodel-based optimization methods: efficient global optimization (EGO), Gutmann-RBF and hybrid and adaptive metamodel (HAM). The test results demonstrate the efficiency and robustness of the proposed method. The AMGO algorithm is finally applied to the structural design of the import and export chamber of a cycloid gear pump, achieving satisfactory results.

  3. Adaptive local backlight dimming algorithm based on local histogram and image characteristics

    NASA Astrophysics Data System (ADS)

    Nadernejad, Ehsan; Burini, Nino; Korhonen, Jari; Forchhammer, Søren; Mantel, Claire

    2013-02-01

    Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlight are a very popular display technology, used for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of the pixels within each backlight segment is calculated and, based on it, an adaptive quantile value is extracted. A classification into three classes based on the average luminance value is performed and, depending on the image luminance class, the extracted information from the local histogram determines the corresponding backlight value. The proposed method has been applied on two modeled screens: one with a high-resolution direct-lit backlight, and the other with 16 edge-lit backlight segments placed in two columns and eight rows. We have compared the proposed algorithm against several known backlight dimming algorithms by simulation, and the results show that the proposed algorithm provides a better trade-off between power consumption and image quality preservation than the other algorithms representing the state of the art among feature-based backlight algorithms.
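
    A rough sketch of the per-segment computation described above; the class boundaries and quantile values are illustrative assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    def backlight_levels(gray, grid=(8, 2)):
        """Per-segment backlight from a luminance-class-dependent quantile (sketch).

        gray : 2-D luminance image with values in [0, 255]
        grid : (rows, cols) of backlight segments
        """
        rows, cols = grid
        h, w = gray.shape
        levels = np.zeros(grid)
        for r in range(rows):
            for c in range(cols):
                seg = gray[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
                mean = seg.mean()
                # darker segments can be dimmed harder (lower quantile); values illustrative
                q = 0.90 if mean < 64 else 0.95 if mean < 160 else 0.99
                levels[r, c] = np.quantile(seg, q) / 255.0
        return levels
    ```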

  4. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For double integrator, ACTA produces a continuous control signal ensuring finite time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties, even in the case that the upper bound of the derivative of the perturbation exists, but it is unknown.

  5. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA]; Toms, Helen L. [Irwin, PA]; Miller, Roger M. [Mars, PA]

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  6. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  7. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  8. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper, we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighting factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for the performance comparisons. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.

  9. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
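
    The momentum-plus-adaptive-restart ingredient can be illustrated with a generic proximal-gradient loop that resets its momentum whenever the objective increases; this sketch does not include the B1-based majorizing matrices that are the paper's actual contribution, and the handles passed in are placeholders.

    ```python
    import numpy as np

    def fista_restart(grad, prox, x0, step, iters=100, objective=None):
        """Accelerated proximal gradient with function-value adaptive restart (sketch).

        grad      : callable, gradient of the smooth data term
        prox      : callable prox(v, step) for the nonsmooth regularizer
        objective : optional callable used to trigger the momentum restart
        """
        x = x_prev = x0.copy()
        t = 1.0
        f_prev = np.inf
        for _ in range(iters):
            z = x + ((t - 1.0) / (t + 2.0)) * (x - x_prev)   # momentum extrapolation
            x_prev = x
            x = prox(z - step * grad(z), step)
            t += 1.0
            if objective is not None:
                f = objective(x)
                if f > f_prev:                               # objective went up: restart momentum
                    t = 1.0
                f_prev = f
        return x
    ```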

  10. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  11. A hybrid skull-stripping algorithm based on adaptive balloon snake models

    NASA Astrophysics Data System (ADS)

    Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua

    2013-02-01

    Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We proposed a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, the fuzzy possibilistic c-means (FPCM) is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes in the IBSR data set. Experimental results show the effectiveness of this new scheme and potential applications in a wide variety of skull-stripping applications.

  12. Phase-distortion correction based on stochastic parallel proportional-integral-derivative algorithm for high-resolution adaptive optics

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi

    2015-02-01

    A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integral are added into the calculation of the incremental control signal. Based on the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to this reference is transformed into proportional and integral terms to produce an adaptive step-size update law for the SPGD algorithm. The adaptation of the step size leads the blind optimization toward the desired goal and helps it escape from local extrema. Unlike a conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulence, based on the optimization of influence-function coefficients, were carried out using a 128-by-128 spatial light modulator, a photodetector, and a control computer. The results show that the presented algorithm offers better performance in both cases. The step-size update based on the residual error and its temporal integral is shown to relieve the severe local lock-in problem of the SPGD algorithm in high-resolution adaptive optics.
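
    A hedged sketch of the idea: a standard two-sided SPGD update whose gain is scaled by proportional and integral terms on the residual between the current metric and a goal value. The gains and the exact form of the update law are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def sppid_optimize(measure_metric, n_act, goal, iters=500,
                       delta=0.05, gain0=0.5, kp=0.4, ki=0.05, rng=None):
        """SPGD with a PI-adapted gain (sketch of the SPPID idea).

        measure_metric : callable(u) -> scalar image-quality metric (higher is better)
        n_act          : number of actuator/correction channels
        goal           : target metric value used to form the residual
        """
        rng = rng or np.random.default_rng()
        u = np.zeros(n_act)
        integral = 0.0
        for _ in range(iters):
            d = delta * rng.choice([-1.0, 1.0], size=n_act)       # Bernoulli perturbation
            dJ = measure_metric(u + d) - measure_metric(u - d)
            resid = goal - measure_metric(u)                      # distance from the goal
            integral += resid
            gain = gain0 * max(0.1, 1.0 + kp * resid + ki * integral)  # PI-scaled step size
            u = u + gain * dJ * d                                 # standard SPGD ascent step
        return u
    ```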

  13. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation is proposed. It restrains the bad performers effectively based on the consideration of the global experience of the evaluator and evaluates the direct trust relation between two grid nodes accurately by consulting the previous trust value rationally. It also consults and improves the reputation evaluation process in PathTrust model by taking account of the inter-organizational trust relationship and combines it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criterion, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains the bad performers and resists against fake transaction attacks and badmouth attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  14. Fast intersections on nested tetrahedrons (FINT): An algorithm for adaptive finite element based distributed parameter estimation.

    PubMed

    Lee, Jae Hoon; Joshi, Amit; Sevick-Muraca, Eva M

    2008-01-01

    A variety of biomedical imaging techniques such as optical and fluorescence tomography, electrical impedance tomography, and ultrasound imaging can be cast as inverse problems, wherein image reconstruction involves the estimation of spatially distributed parameter(s) of the PDE system describing the physics of the imaging process. Finite element discretization of imaged domain with tetrahedral elements is a popular way of solving the forward and inverse imaging problems on complicated geometries. A dual-adaptive mesh-based approach wherein, one mesh is used for solving the forward imaging problem and the other mesh used for iteratively estimating the unknown distributed parameter, can result in high resolution image reconstruction at minimum computation effort, if both the meshes are allowed to adapt independently. Till date, no efficient method has been reported to identify and resolve intersection between tetrahedrons in independently refined or coarsened dual meshes. Herein, we report a fast and robust algorithm to identify and resolve intersection of tetrahedrons within nested dual meshes generated by 8-similar subtetrahedron subdivision scheme. The algorithm exploits finite element weight functions and gives rise to a set of weight functions on each vertex of disjoint tetrahedron pieces that completely cover up the intersection region of two tetrahedrons. The procedure enables fully adaptive tetrahedral finite elements by supporting independent refinement and coarsening of each individual mesh while preserving fast identification and resolution of intersection. The computational efficiency of the algorithm is demonstrated by diffuse photon density wave solutions obtained from a single- and a dual-mesh, and by reconstructing a fluorescent inclusion in simulated phantom from boundary frequency domain fluorescence measurements.

  15. Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.

    PubMed

    Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi

    2011-04-01

    Thyristor controlled reactor with fixed capacitor (TCR/FC) compensators have the capability of compensating reactive power and improving power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed in order to eliminate delay and increase the response of the TCR compensator. The algorithm designed for the adaptive filters is performed based on the least mean square (LMS) algorithm. In this design, instead of fixed capacitors, band-pass LC filters are used. To evaluate the filter, a TCR/FC compensator was used for nonlinear and time varying loads of electric arc furnaces (EAFs). These loads caused occurrence of power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics and unbalancing in voltage and current. The above design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminated delay in the response and improved the performance of the compensator in the power system.
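
    For reference, the plain LMS update at the core of such adaptive filters is sketched below as a transversal filter tracking a desired signal; the tap count and step size are illustrative, and the band-pass LC configuration of the compensator is not modeled.

    ```python
    import numpy as np

    def lms_filter(x, d, n_taps=16, mu=0.01):
        """Plain LMS adaptive filter: estimate d[n] from recent samples of x[n].

        x, d : float arrays of equal length (input and desired signals)
        """
        w = np.zeros(n_taps)
        y = np.zeros_like(d)
        e = np.zeros_like(d)
        for n in range(n_taps, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]   # most recent samples first
            y[n] = w @ u
            e[n] = d[n] - y[n]
            w += 2.0 * mu * e[n] * u            # LMS weight update
        return y, e, w
    ```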

  16. Tensor dissimilarity based adaptive seeding algorithm for DT-MRI visualization with streamtubes

    NASA Astrophysics Data System (ADS)

    Weldeselassie, Yonas T.; Hamarneh, Ghassan; Weiskopf, Daniel

    2007-03-01

    In this paper, we propose an adaptive seeding strategy for visualization of diffusion tensor magnetic resonance imaging (DT-MRI) data using streamtubes. DT-MRI is a medical imaging modality that captures unique water diffusion properties and fiber orientation information of the imaged tissues. Visualizing DT-MRI data using streamtubes has the advantage that not only the anisotropic nature of the diffusion is visualized but also the underlying anatomy of biological structures is revealed. This makes streamtubes significant for the analysis of fibrous tissues in medical images. In order to avoid rendering multiple similar streamtubes, an adaptive seeding strategy is employed which takes into account similarity of tensors in a given region. The goal is to automate the process of generating seed points such that regions with dissimilar tensors are assigned more seed points compared to regions with similar tensors. The algorithm is based on tensor dissimilarity metrics that take into account both diffusion magnitudes and directions to optimize the seeding positions and density of streamtubes in order to reduce the visual clutter. Two recent advances in tensor calculus and tensor dissimilarity metrics are utilized: the Log-Euclidean and the J-divergence. Results show that adaptive seeding not only helps to cull unnecessary streamtubes that would obscure visualization but also do so without having to compute the culled streamtubes, which makes the visualization process faster.
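
    The Log-Euclidean dissimilarity used above, and a toy rule that raises seeding density where neighboring tensors differ, might be sketched as follows; the seeding rule is illustrative, not the paper's.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(T1, T2):
        """Log-Euclidean dissimilarity between two symmetric positive-definite tensors."""
        return np.linalg.norm(logm(T1) - logm(T2), ord='fro')

    def seed_density(tensor_field, base=1.0, gain=4.0):
        """Toy adaptive seeding weight: more seeds where neighboring tensors differ.

        tensor_field : (N, 3, 3) array of diffusion tensors sampled along one direction
        """
        d = np.array([log_euclidean_distance(tensor_field[i], tensor_field[i + 1])
                      for i in range(len(tensor_field) - 1)])
        return base + gain * d / (d.max() + 1e-12)   # higher weight = denser seeding
    ```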

  17. Research on algorithms for adaptive antenna arrays

    NASA Astrophysics Data System (ADS)

    Widrow, B.; Newman, W.; Gooch, R.; Duvall, K.; Shur, D.

    1981-08-01

    The fundamental efficiency of adaptive algorithms is analyzed. It is found that noise in the adaptive weights increases with convergence speed. This causes loss in mean-square-error performance. Efficiency is considered from the point of view of misadjustment versus speed of convergence. A new version of the LMS algorithm based on Newton's method is analyzed and shown to make maximally efficient use of real-time input data. The performance of this algorithm is not affected by eigenvalue disparity. Practical algorithms can be devised that closely approximate Newton's method. In certain cases, the steepest descent version of LMS performs as well as Newton's method. The efficiency of adaptive algorithms with nonstationary input environments is analyzed where signals, jammers, and background noises can be of a transient and nonstationary nature. A new adaptive filtering method for broadband adaptive beamforming is described which uses both poles and zeros in the adaptive signal filtering paths from the antenna elements to the final array output.

  18. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  19. Performance study of LMS based adaptive algorithms for unknown system identification

    SciTech Connect

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  1. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
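
    As a concrete instance of the compared algorithms, a normalized LMS (NLMS) identifier for the ASI setup can be sketched as below; the tap count, step size, and noise level are illustrative.

    ```python
    import numpy as np

    def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-6):
        """Normalized LMS for unknown-system identification.

        x : input driving both the unknown system and the adaptive filter
        d : noisy output of the unknown system (desired signal)
        """
        w = np.zeros(n_taps)
        for n in range(n_taps, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]     # current and past inputs, newest first
            e = d[n] - w @ u
            w += (mu / (eps + u @ u)) * e * u     # step normalized by input power
        return w

    # usage sketch: identify a random 8-tap FIR system from noisy observations
    rng = np.random.default_rng(0)
    h = rng.standard_normal(8)
    x = rng.standard_normal(5000)
    d = np.convolve(x, h, mode='full')[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat = nlms_identify(x, d, n_taps=8)         # w_hat should approach h
    ```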

  2. Heart Motion Prediction Based on Adaptive Estimation Algorithms for Robotic Assisted Beating Heart Surgery

    PubMed Central

    Tuna, E. Erdem; Franke, Timothy J.; Bebek, Özkan; Shiose, Akira; Fukamachi, Kiyotaka; Çavuşoğlu, M. Cenk

    2013-01-01

    Robotic assisted beating heart surgery aims to allow surgeons to operate on a beating heart without stabilizers as if the heart were stationary. The robot actively cancels heart motion by closely following a point of interest (POI) on the heart surface—a process called Active Relative Motion Canceling (ARMC). Due to the high bandwidth of the POI motion, it is necessary to supply the controller with an estimate of the immediate future of the POI motion over a prediction horizon in order to achieve sufficient tracking accuracy. In this paper, two least-squares based prediction algorithms, using an adaptive filter to generate future position estimates, are implemented and studied. The first method assumes a linear system relation between the consecutive samples in the prediction horizon. In contrast, the second method performs this parametrization independently for each point over the whole horizon. The effects of predictor parameters and variations in heart rate on tracking performance are studied with constant and varying heart rate data. The predictors are evaluated using a 3-degrees-of-freedom test-bed and prerecorded in-vivo motion data. Then, the one-step prediction and tracking performances of the presented approaches are compared with an Extended Kalman Filter predictor. Finally, the essential features of the proposed prediction algorithms are summarized. PMID:23976889
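
    A minimal sketch of the first idea: fit a linear predictor on a sliding window of past POI samples by least squares and roll it forward over the horizon. The order and horizon are illustrative, and the paper's exact parametrization is not reproduced.

    ```python
    import numpy as np

    def ls_predict(signal, order=6, horizon=5):
        """Fit an AR-style linear predictor by least squares and roll it forward.

        signal  : 1-D array of past positions of one POI coordinate
        order   : number of past samples used to predict the next one
        horizon : number of future samples to predict
        """
        X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
        y = signal[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares predictor coefficients
        hist = list(signal[-order:])
        preds = []
        for _ in range(horizon):
            nxt = float(np.dot(coef, hist[-order:]))    # one-step prediction
            preds.append(nxt)
            hist.append(nxt)                            # feed prediction back in
        return np.array(preds)
    ```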

  3. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution.

    PubMed

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-23

    Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.

  4. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the pulse-coupled neural network (PCNN) is introduced into this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel multi-focus image fusion algorithm based on the PCNN and the wavelet transform is proposed. First, the two original images are decomposed by the wavelet transform. Then a fusion rule in the wavelet domain is defined based on the PCNN. The algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value is chosen adaptively. The wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates toward the minimum gray level over time until all pixels of the image have fired; the output of the PCNN at each iteration is therefore the set of wavelet coefficients whose threshold strength fires at that time. The firing times of the neurons are mapped to the corresponding gray-scale range to form a firing-time map, which indicates whether the features around each neuron are salient or not. The fusion coefficients are decided by a compare-and-select operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, in order to sufficiently reflect the order of the firing times, the threshold-adjusting constant αΘ is estimated from the specified number of iterations, so that after the iterations are finished every wavelet coefficient has been activated. Experiments on multi-focus images were performed to verify the effectiveness of the proposed rules.
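
    Setting the PCNN firing-time rule aside, the wavelet-domain selection step can be illustrated with a common max-absolute-coefficient baseline (not the paper's rule); the wavelet and decomposition level are illustrative.

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet='db2', level=2):
        """Fuse two registered multi-focus images in the wavelet domain (max-abs rule)."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                      # average the approximation band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            # keep the detail coefficient with the larger magnitude in each band
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in ((ha, hb), (va, vb), (da, db))))
        return pywt.waverec2(fused, wavelet)
    ```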

  5. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    SciTech Connect

    Poynee, L A

    2003-05-06

    Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
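
    A minimal version of the correlation idea, estimating the integer-pixel spot shift from the FFT cross-correlation peak; sub-pixel interpolation and windowing are omitted, and the function names are illustrative.

    ```python
    import numpy as np

    def correlation_shift(spot, reference):
        """Estimate the (row, col) shift of `spot` relative to `reference`
        from the peak of their circular cross-correlation (integer-pixel sketch)."""
        # cross-correlation via FFT; zero-mean both images to suppress background bias
        S = np.fft.fft2(spot - spot.mean())
        R = np.fft.fft2(reference - reference.mean())
        corr = np.fft.ifft2(S * np.conj(R)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts larger than half the frame back to negative values
        shape = np.array(corr.shape, dtype=float)
        shift = np.array(peak, dtype=float)
        return np.where(shift > shape / 2, shift - shape, shift)
    ```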

  6. Blind Adaptive Interference Suppression Based on Set-Membership Constrained Constant-Modulus Algorithms With Dynamic Bounds

    NASA Astrophysics Data System (ADS)

    de Lamare, Rodrigo C.; Diniz, Paulo S. R.

    2013-03-01

    This work presents blind constrained constant modulus (CCM) adaptive algorithms based on the set-membership filtering (SMF) concept and incorporates dynamic bounds for interference suppression applications. We develop stochastic gradient and recursive least squares type algorithms based on the CCM design criterion in accordance with the specifications of the SMF concept. We also propose a blind framework that includes channel and amplitude estimators that take into account parameter estimation dependency, multiple access interference (MAI) and inter-symbol interference (ISI) to address the important issue of bound specification in multiuser communications. A convergence and tracking analysis of the proposed algorithms is carried out along with the development of analytical expressions to predict their performance. Simulations for a number of scenarios of interest with a DS-CDMA system show that the proposed algorithms outperform previously reported techniques with a smaller number of parameter updates and a reduced risk of overbounding or underbounding.
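
    To make the set-membership idea concrete, the Python sketch below shows a simplified, unconstrained data-selective constant-modulus update with a fixed error bound gamma: the filter is only updated when the constant-modulus error exceeds the bound, with a step size that shrinks as the error approaches it. The constrained (CCM) design and the dynamic-bound estimators proposed in the paper are omitted, and all names are illustrative.

        import numpy as np

        def sm_cm_update(w, x, gamma=0.3, mu=0.05):
            # One data-selective constant-modulus step: update only when the
            # CM error magnitude exceeds the bound gamma.
            y = np.vdot(w, x)                    # filter output w^H x
            e = np.abs(y) ** 2 - 1.0             # constant-modulus error
            if np.abs(e) <= gamma:
                return w, False                  # inside the bound: no update
            step = mu * (1.0 - gamma / np.abs(e))
            return w - step * e * np.conj(y) * x, True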

  7. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on a kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of the KELM instead of a single fixed kernel, where the related kernel parameters and the kernel weights can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as it can effectively determine solution updating strategies and suitable parameters to produce a flexible kernel function within SABC. Simulations demonstrate that the proposed algorithm not only self-adaptively determines suitable parameters and solution updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, and the results show good stability of the proposed algorithm.
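
    For reference, the core KELM solve that any such kernel-parameter search wraps is only a regularised linear system in the kernel matrix. The Python sketch below fits and applies a single-Gaussian-kernel ELM; the multi-kernel combination and the SABC search over the regularisation constant, kernel widths and kernel weights described above are not shown, and the training arrays X and T are assumed inputs.

        import numpy as np

        def rbf_kernel(A, B, gamma=1.0):
            # Gaussian (RBF) kernel matrix between rows of A and rows of B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kelm_fit(X, T, C=100.0, gamma=1.0):
            # Output weights of a kernel ELM: solve (K + I/C) alpha = T.
            K = rbf_kernel(X, X, gamma)
            return np.linalg.solve(K + np.eye(len(X)) / C, T)

        def kelm_predict(X_train, alpha, X_new, gamma=1.0):
            return rbf_kernel(X_new, X_train, gamma) @ alpha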

  8. A dynamic control algorithm based on physiological parameters and wearable interfaces for adaptive ventricular assist devices.

    PubMed

    Tortora, G; Fontana, R; Argiolas, S; Vatteroni, M; Dario, P; Trivella, M G

    2015-08-01

    In this work we present an innovative algorithm for the dynamic control of ventricular assist devices (VADs), based on the acquisition of continuous physiological and functional parameters such as heart rate, blood oxygenation, temperature, and patient movements. Such parameters are acquired by wearable devices (MagIC & Winpack) and sensors implanted close to the VAD. The aim of the proposed algorithm is to dynamically control the hydraulic power of the VAD as a function of the detected parameters, the patient's activity and emotional status. In this way, the cardiac dynamics can be regulated by the proposed autoregulation control algorithm for sensorized VADs, thus providing new therapy approaches for heart failure. PMID:26737403

  9. A dynamic control algorithm based on physiological parameters and wearable interfaces for adaptive ventricular assist devices.

    PubMed

    Tortora, G; Fontana, R; Argiolas, S; Vatteroni, M; Dario, P; Trivella, M G

    2015-08-01

    In this work we present an innovative algorithm for the dynamic control of ventricular assist devices (VADs), based on the acquisition of continuous physiological and functional parameters such as heart rate, blood oxygenation, temperature, and patient movements. Such parameters are acquired by wearable devices (MagIC & Winpack) and sensors implanted close to the VAD. The aim of the proposed algorithm is to dynamically control the hydraulic power of the VAD as a function of the detected parameters, the patient's activity and emotional status. In this way, the cardiac dynamics can be regulated by the proposed autoregulation control algorithm for sensorized VADs, thus providing new therapy approaches for heart failure.

  10. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    Imaging non-uniformity of an infrared focal plane array (IRFPA) behaves as fixed-pattern noise superimposed on the image, which seriously affects the imaging quality of the infrared system. In scene-based non-uniformity correction methods, the drawbacks of ghosting artifacts and image blurring seriously affect the sensitivity of the IRFPA imaging system and visibly decrease the image quality. This paper proposes an improved neural-network non-uniformity correction method with an adaptive learning rate. On the one hand, using a guided filter, the proposed algorithm decreases the effect of ghosting artifacts. On the other hand, because an inappropriate learning rate is the main cause of image blurring, the proposed algorithm utilizes an adaptive learning rate with a temporal-domain factor to eliminate the effect of image blurring. In short, the proposed algorithm combines the merits of the guided filter and the adaptive learning rate. Several real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm can not only reduce the non-uniformity with fewer ghosting artifacts but also overcome the problem of image blurring in static areas.

  11. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications. PMID:26978363
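
    The heart of both STAP and SFAP processing is estimating an array covariance from a batch of snapshots and solving for minimum-variance weights. The Python sketch below shows that core step in plain numpy for a single batch; the GPU batching, space-time stacking and calibration described above are omitted, and X (elements x samples) and the steering or constraint vector s are assumed inputs.

        import numpy as np

        def mvdr_weights(X, s, diag_load=1e-3):
            # X: (elements x snapshots) complex baseband batch, s: steering vector.
            n_elem, n_snap = X.shape
            R = X @ X.conj().T / n_snap                       # sample covariance
            R += diag_load * np.trace(R).real / n_elem * np.eye(n_elem)
            r_inv_s = np.linalg.solve(R, s)
            return r_inv_s / (s.conj() @ r_inv_s)             # w = R^-1 s / (s^H R^-1 s)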

  12. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-03-11

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications.

  13. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    PubMed Central

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluate new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as serving as a real product in certain applications. PMID:26978363

  14. Strategies to overcome photobleaching in algorithm-based adaptive optics for nonlinear in-vivo imaging.

    PubMed

    Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith

    2014-01-01

    We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two-photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure to the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
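
    The optimisation loop itself is simple: perturb the mirror-actuator vector and keep the perturbation only if the merit signal improves. The Python sketch below shows such a random search; measure_merit is a hypothetical stand-in for the instrument readout (two-photon or second-harmonic signal), and the photobleaching-aware strategies discussed above (pre-optimised look-up tables, reduced exposure) are not modelled.

        import numpy as np

        def random_search(measure_merit, n_actuators=32, iters=200, sigma=0.05, seed=0):
            # Keep a random actuator perturbation only if the merit improves.
            rng = np.random.default_rng(seed)
            shape = np.zeros(n_actuators)            # current mirror shape
            best = measure_merit(shape)
            for _ in range(iters):
                trial = shape + sigma * rng.standard_normal(n_actuators)
                merit = measure_merit(trial)
                if merit > best:
                    shape, best = trial, merit
            return shape, best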

  15. Fast adaptive OFDM-PON over single fiber loopback transmission using dynamic rate adaptation-based algorithm for channel performance improvement

    NASA Astrophysics Data System (ADS)

    Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook

    2014-03-01

    In this paper, we propose a novel fast adaptive approach applied to an OFDM-PON 20-km single-fiber loopback transmission system to improve channel performance, in terms of a stabilized BER below 2 × 10^-3 and a throughput beyond 10 Gb/s. The upstream transmission is performed through light-source-seeded modulation using a 1-GHz RSOA at the ONU. Experimental results indicated that the dynamic rate adaptation algorithm based on the greedy Levin-Campello method can be an effective solution to mitigate the channel instability and data-rate degradation caused by the Rayleigh backscattering effect and inefficient subcarrier resource allocation.
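
    The greedy Levin-Campello idea referenced above can be summarised in a few lines: repeatedly give one more bit to the subcarrier whose next bit costs the least additional power. The Python sketch below is a minimal, idealised version of that loop; the per-subcarrier carrier-to-noise ratios (cnr), the SNR gap and the bit cap are illustrative assumptions rather than the system's measured values.

        import numpy as np

        def levin_campello(cnr, total_bits, snr_gap=10 ** (9.8 / 10), max_bits=10):
            # Greedy bit loading: each new bit goes to the cheapest subcarrier.
            cnr = np.asarray(cnr, dtype=float)
            bits = np.zeros(len(cnr), dtype=int)
            power = np.zeros(len(cnr))
            for _ in range(total_bits):
                inc = snr_gap * (2.0 ** bits) / cnr   # cost of one more bit per carrier
                inc[bits >= max_bits] = np.inf
                k = int(np.argmin(inc))
                bits[k] += 1
                power[k] += inc[k]
            return bits, power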

  16. An intelligent computational algorithm based on neural network for spatial data mining in adaptability evaluation

    NASA Astrophysics Data System (ADS)

    Miao, Zuohua; Xu, Hong; Chen, Yong; Zeng, Xiangyang

    2009-10-01

    The back-propagation neural network (BPNN) is an intelligent computational model based on learning from samples. This model differs from the traditional adaptability-evaluation approach of symbolic logic reasoning based on knowledge and rules. At the same time, the BPNN model has shortcomings such as slow convergence speed and local minima. During the process of adaptability evaluation, the factors involved are diverse, complicated and uncertain, so an effective model should adopt data mining methods and fuzzy logic techniques. In this paper, the authors improve the back-propagation procedure of the BPNN and apply fuzzy logic theory for dynamic inference of fuzzy rules. The authors also give a detailed description of the training and experimental process of the novel model.

  17. Model parameter adaption-based multi-model algorithm for extended object tracking using a random matrix.

    PubMed

    Li, Borui; Mu, Chundi; Han, Shuli; Bai, Tianming

    2014-01-01

    Traditional object tracking technology usually regards the target as a point source object. However, this approximation is no longer appropriate for tracking extended objects such as large targets and closely spaced group objects. Bayesian extended object tracking (EOT) using a random symmetrical positive definite (SPD) matrix is a very effective method to jointly estimate the kinematic state and physical extension of the target. The key issue in the application of this random matrix-based EOT approach is to model the physical extension and measurement noise accurately. Model parameter adaptive approaches for both extension dynamic and measurement noise are proposed in this study based on the properties of the SPD matrix to improve the performance of extension estimation. An interacting multi-model algorithm based on model parameter adaptive filter using random matrix is also presented. Simulation results demonstrate the effectiveness of the proposed adaptive approaches and multi-model algorithm. The estimation performance of physical extension is better than the other algorithms, especially when the target maneuvers. The kinematic state estimation error is lower than the others as well. PMID:24763252

  18. Model parameter adaption-based multi-model algorithm for extended object tracking using a random matrix.

    PubMed

    Li, Borui; Mu, Chundi; Han, Shuli; Bai, Tianming

    2014-04-24

    Traditional object tracking technology usually regards the target as a point source object. However, this approximation is no longer appropriate for tracking extended objects such as large targets and closely spaced group objects. Bayesian extended object tracking (EOT) using a random symmetrical positive definite (SPD) matrix is a very effective method to jointly estimate the kinematic state and physical extension of the target. The key issue in the application of this random matrix-based EOT approach is to model the physical extension and measurement noise accurately. Model parameter adaptive approaches for both extension dynamic and measurement noise are proposed in this study based on the properties of the SPD matrix to improve the performance of extension estimation. An interacting multi-model algorithm based on model parameter adaptive filter using random matrix is also presented. Simulation results demonstrate the effectiveness of the proposed adaptive approaches and multi-model algorithm. The estimation performance of physical extension is better than the other algorithms, especially when the target maneuvers. The kinematic state estimation error is lower than the others as well.

  19. A graph based algorithm for adaptable dynamic airspace configuration for NextGen

    NASA Astrophysics Data System (ADS)

    Savai, Mehernaz P.

    The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such a static configuration of sectors will not be able to handle the continued growth of air travel, which is projected to more than double current traffic by 2025. Under the initiative of the Next Generation Air Transportation System (NextGen), the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that the sectors should adapt to the changing traffic so as to reduce the controller workload variance over time while increasing throughput. Changes in the resectorization should be such that there is a minimal increase in the exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost in order to justify a change. This leads to the analysis of the concept of transition workload, which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric which considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented within the Dynamic Airspace Configuration (DAC) algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of airspace with given user-defined constraints and hence provides the user more flexibility and control over the various partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major

  20. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
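
    To illustrate the exhaustive-search flavour of the link selection described above, the Python sketch below shows a single node that combines its own estimate with every subset of its neighbours' estimates, keeps the subset that best predicts the current local measurement, and then performs an LMS step on the combined estimate. This is a loose, single-node simplification under assumed names (x, d, neighbour_w), not the paper's full distributed strategy or its RLS and sparsity-inspired variants.

        import numpy as np
        from itertools import chain, combinations

        def link_select_lms(w, x, d, neighbour_w, mu=0.05):
            # Try every subset of neighbour estimates and keep the combination
            # that best predicts the local measurement d for regressor x.
            subsets = chain.from_iterable(
                combinations(range(len(neighbour_w)), r)
                for r in range(len(neighbour_w) + 1))
            best_w, best_err = w, abs(d - x @ w)
            for subset in subsets:
                w_comb = np.mean([w] + [neighbour_w[i] for i in subset], axis=0)
                err = abs(d - x @ w_comb)
                if err < best_err:
                    best_w, best_err = w_comb, err
            e = d - x @ best_w
            return best_w + mu * e * x               # LMS step on the combined estimate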

  1. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major renewable energy options because of its abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. First, a practical PV system model is studied, with the series and shunt resistances, which are neglected in some research, determined explicitly. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, exploiting input-impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
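
    The adaptive step-size idea can be shown in a few lines: perturb the duty ratio, reverse direction when the power drops, and scale the next perturbation by the observed |dP/dV|. The Python sketch below is a hedged illustration of that loop; measure_pv is a hypothetical function returning (voltage, power) for a given duty ratio, and the gain, limits and the thesis's special handling of sharp insolation changes are not reproduced.

        def adaptive_po(measure_pv, duty=0.5, step=0.02, gain=0.01,
                        d_min=0.1, d_max=0.9, iters=500):
            # Adaptive-step perturb and observe on the converter duty ratio.
            v_prev, p_prev = measure_pv(duty)
            direction = 1.0
            for _ in range(iters):
                duty = min(max(duty + direction * step, d_min), d_max)
                v, p = measure_pv(duty)
                if p < p_prev:                   # power fell: reverse the perturbation
                    direction = -direction
                # Scale the next step by |dP/dV|: big far from the MPP, small near it.
                step = min(0.05, max(1e-4,
                                     gain * abs(p - p_prev) / max(abs(v - v_prev), 1e-6)))
                v_prev, p_prev = v, p
            return duty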

  2. Optree: a learning-based adaptive watershed algorithm for neuron segmentation.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Metaxas, Dimitris

    2014-01-01

    We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient as the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally. PMID:25333106

  3. Streamlining algorithms for complete adaptation

    NASA Technical Reports Server (NTRS)

    Erickson, J. C., Jr. (Editor); Chevallier, J. P.; Goodyer, Michael J.; Hornung, Hans G.; Mignosi, Andre; Sears, William R.; Smith, J.; Wedemeyer, Erich H.

    1990-01-01

    For purposes of the adaptive-wall algorithms to be described, the modern era is considered to have begun with the simultaneous, independent recognition of the concept of matching an experimental inner flow across an interface to a computed outer flow by Chevallier, Ferri, Goodyer, Lissaman, Rubbert, and Sears. Fundamental investigations of the adaptive-wall matching concept by means of numerical simulations and theoretical considerations are described. An overview of the development and operation of 2D adaptive-wall facilities from about 1970 until the present is given, followed by similar material for 3D adaptive-wall facilities from approximately 1978 until the present. A general formulation of adaptation strategy is presented, with a theoretical basis for adaptation followed by 2D flexible, impermeable-wall applications; 2D ventilated-wall applications; 3D flexible, impermeable-wall applications; and 3D ventilated-wall applications. Representative 2D and 3D experimental results are given, followed by a discussion of limitations and open questions.

  4. An adaptive integrated algorithm for noninvasive fetal ECG separation and noise reduction based on ICA-EEMD-WS.

    PubMed

    Liu, Guangchen; Luan, Yihui

    2015-11-01

    High-resolution fetal electrocardiogram (FECG) plays an important role in assisting physicians to detect fetal changes in the womb and to make clinical decisions. However, in real situations, a clear FECG is difficult to extract because it is usually overwhelmed by the dominant maternal ECG and other contaminating noise such as baseline wander and high-frequency noise. In this paper, we propose a novel integrated adaptive algorithm based on independent component analysis (ICA), ensemble empirical mode decomposition (EEMD), and wavelet shrinkage (WS) denoising, denoted as ICA-EEMD-WS, for FECG separation and noise reduction. First, the ICA algorithm is used to separate the mixed abdominal ECG signal and obtain the noisy FECG. Second, the noise in the FECG is reduced by a three-step integrated algorithm comprising EEMD, statistical inference on the useful subcomponents with WS processing, and partial reconstruction for baseline wander reduction. Finally, we evaluate the proposed algorithm using simulated data sets. The results indicate that the proposed ICA-EEMD-WS outperforms the conventional algorithms in signal denoising. PMID:26429348

  5. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on an adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. Firstly, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the current iteration number and the fitness value of the particles, the algorithm can change the weight coefficient and adjust the speed at which particles traverse the search space, so the local optimization ability is enhanced. Secondly, a logistic self-mapping chaotic search is carried out within the particle swarm optimization algorithm, which makes the algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm and the plain particle swarm optimization algorithm by changing the linewidth, the signal-to-noise ratio and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to the feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflection, effectively improving the accuracy of Brillouin frequency shift extraction.
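
    The two ingredients named above, an inertia weight that adapts as the search proceeds and a chaotic perturbation around the best solution, are shown in the minimal Python sketch below. A simple iteration-based inertia schedule stands in for the paper's fitness-adaptive weight, a logistic map drives the chaotic search, and objective, bounds and all constants are illustrative assumptions.

        import numpy as np

        def adaptive_chaos_pso(objective, dim, bounds, n_particles=30, iters=200,
                               w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros((n_particles, dim))
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()
            g_f = pbest_f.min()
            z = 0.7                                      # logistic-map state
            for t in range(iters):
                w = w_max - (w_max - w_min) * t / iters  # adaptive (decreasing) inertia
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                if pbest_f.min() < g_f:
                    g, g_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()
                z = 4.0 * z * (1.0 - z)                  # chaotic search around the best
                trial = np.clip(g + (z - 0.5) * 0.1 * (hi - lo), lo, hi)
                if objective(trial) < g_f:
                    g, g_f = trial, objective(trial)
            return g, g_f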

  6. An Adaptive Wavelet-Based Denoising Algorithm for Enhancing Speech in Non-stationary Noise Environment

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because of the difficulties in accurately estimating the local noise spectrum. In this paper, a simple method of noise estimation employing a voice activity detector (VAD) is proposed. We can improve the output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts according to the results of the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert the noisy signal into wavelet coefficients (WCs). It is found that a VAD based on the bark-scale spectral entropy parameter, called BS-Entropy, is superior to other energy-based approaches, especially at variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then temporally adjusted according to the result of the VAD. In a speech-dominated frame, the speech is categorized into either a voiced frame or an unvoiced frame. A voiced frame possesses a strong tone-like spectrum in the lower subbands, so the WCs of the lower bands must be preserved. On the contrary, the WCT tends to increase in the lower bands if the speech is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are then used to evaluate the proposed system. The experiments show that this algorithm is effective under various noise conditions, especially for colored noise and non-stationary noise conditions.

  7. The adaptive dynamic community detection algorithm based on the non-homogeneous random walking

    NASA Astrophysics Data System (ADS)

    Xin, Yu; Xie, Zhi-Qiang; Yang, Jing

    2016-05-01

    As habits and customs change, people's social activity tends to change as well, so a community-evolution analysis method is required to mine the dynamic information in social networks. To this end, we design a random-walking possibility function and a topology gain function to calculate the global influence matrix of the nodes. From the analysis of the global influence matrix, the clustering directions of the nodes can be obtained, and thus the NRW (Non-Homogeneous Random Walk) method for detecting static overlapping communities can be established. We then design the ANRW (Adaptive Non-Homogeneous Random Walk) method by adapting the nodes affected by dynamic events, based on the NRW. The ANRW combines local community detection with dynamic adaptive adjustment to decrease its computational cost. Furthermore, the ANRW treats each node as the unit of computation, so it is well suited to parallel computing and can meet the requirements of large-dataset mining. Finally, experimental analysis verifies the efficiency of the ANRW for dynamic community detection.

  8. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received code and the corresponding columns of the parity-check matrix can be punctured, adapting the parity-check matrix during the decoding process to reduce the computational complexity. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  9. An Adaptive Path Planning Algorithm for Cooperating Unmanned Air Vehicles

    SciTech Connect

    Cunningham, C.T.; Roberts, R.S.

    2000-09-12

    An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.

  10. Adaptive path planning algorithm for cooperating unmanned air vehicles

    SciTech Connect

    Cunningham, C T; Roberts, R S

    2001-02-08

    An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.

  11. Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm.

    PubMed

    Onoma, D P; Ruan, S; Thureau, S; Nkhali, L; Modzelewski, R; Monnehan, G A; Vera, P; Gardin, I

    2014-12-01

    A segmentation algorithm based on the random walk (RW) method, called 3D-LARW, has been developed to delineate small tumors or tumors with a heterogeneous distribution of FDG on PET images. Based on the original RW algorithm [1], we propose an improved approach using new parameters that depend on the Euclidean distance between two adjacent voxels instead of a fixed value, and that integrate the probability densities of the labels into the system of linear equations used in the RW. These improvements were evaluated and compared with the original RW method, thresholding with a fixed value (40% of the maximum in the lesion), an adaptive thresholding algorithm and the FLAB method, on uniform spheres filled with FDG, on simulated heterogeneous spheres and on clinical data (14 patients). On these three different data sets, 3D-LARW showed better segmentation results than the original RW algorithm and the three other methods. As expected, these improvements are more pronounced for the segmentation of small tumors or tumors having heterogeneous FDG uptake.

  12. Acceleration amplitude-phase regulation for electro-hydraulic servo shaking table based on LMS adaptive filtering algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Jianjun; Di, Duotao; Jiang, Guilin; Gao, Shuang

    2012-10-01

    An electro-hydraulic servo shaking table usually requires good control performance for acceleration replication. The poles of the electro-hydraulic servo shaking table are placed by the three-variable control method using pole placement theory. The system frequency band is thus extended and the system stability is also enhanced. Phase delay and amplitude attenuation occur in the electro-hydraulic servo shaking table in response to a sinusoidal acceleration input. A method for eliminating phase delay and amplitude attenuation based on the LMS adaptive filtering algorithm is proposed here. The task is accomplished by adjusting the weights using the LMS adaptive filtering algorithm when there exists phase delay and amplitude attenuation between the input and its corresponding acceleration response. The reference input is weighted in such a way that the system output tracks the input efficiently. The weighted input signal is fed to the control system such that the output phase delay and amplitude attenuation are both cancelled. This concept is used as a basis for the development of the amplitude-phase regulation (APR) algorithm. The method does not need to estimate the system model and has good real-time performance. Experimental results demonstrate the efficiency and validity of the proposed APR control scheme.
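
    The weighting idea above can be illustrated with a two-weight LMS filter acting on the quadrature (sin/cos) components of the command signal, so that the amplitude and phase of the drive are adjusted until the plant output matches the reference. In the Python sketch below a first-order recursion stands in for the shaking-table dynamics, and the sampling rate, gains and step size are illustrative assumptions, not the experimental values.

        import numpy as np

        fs, f0, mu = 1000.0, 5.0, 0.02          # sample rate, drive frequency, step size
        t = np.arange(0.0, 5.0, 1.0 / fs)
        ref = np.sin(2 * np.pi * f0 * t)        # desired acceleration reference
        w = np.zeros(2)                         # weights on the [sin, cos] components
        y_prev, out = 0.0, np.empty_like(t)
        for n, tn in enumerate(t):
            basis = np.array([np.sin(2 * np.pi * f0 * tn), np.cos(2 * np.pi * f0 * tn)])
            u = w @ basis                       # amplitude/phase-adjusted drive signal
            y = 0.9 * y_prev + 0.08 * u         # toy plant with gain loss and phase lag
            y_prev = y
            e = ref[n] - y                      # tracking error
            w += mu * e * basis                 # LMS update of the two weights
            out[n] = y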

  13. Cerebellar-inspired algorithm for adaptive control of nonlinear dielectric elastomer-based artificial muscle

    PubMed Central

    Assaf, Tareq; Rossiter, Jonathan M.; Porrill, John

    2016-01-01

    Electroactive polymer actuators are important for soft robotics, but can be difficult to control because of compliance, creep and nonlinearities. Because biological control mechanisms have evolved to deal with such problems, we investigated whether a control scheme based on the cerebellum would be useful for controlling a nonlinear dielectric elastomer actuator, a class of artificial muscle. The cerebellum was represented by the adaptive filter model, and acted in parallel with a brainstem, an approximate inverse plant model. The recurrent connections between the two allowed for direct use of sensory error to adjust motor commands. Accurate tracking of a displacement command in the actuator's nonlinear range was achieved by either semi-linear basis functions in the cerebellar model or semi-linear functions in the brainstem corresponding to recruitment in biological muscle. In addition, allowing transfer of training between cerebellum and brainstem as has been observed in the vestibulo-ocular reflex prevented the steady increase in cerebellar output otherwise required to deal with creep. The extensibility and relative simplicity of the cerebellar-based adaptive-inverse control scheme suggests that it is a plausible candidate for controlling this type of actuator. Moreover, its performance highlights important features of biological control, particularly nonlinear basis functions, recruitment and transfer of training. PMID:27655667

  14. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Alcolea, J.

    Solving the radiative transfer problem is a common problem in many fields of astrophysics. With the increasing angular resolution of space- and ground-based telescopes (VLTI, HST), and with the instruments of the next decade (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. for a dusty medium), compute the temperature structure of the considered medium and obtain the flux of the object (SED and images).

  15. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.

  16. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) display a tendency toward premature convergence when dealing with scheduling problems. To improve the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA targeted at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
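
    One common way to make the crossover and mutation operators self-adaptive is to tie their probabilities to how an individual's fitness compares with the population average and maximum, so that good individuals are disturbed less and poor ones are explored more. The Python sketch below follows that general scheme (in the spirit of the Srinivas-Patnaik adaptive GA) with illustrative constants; it is not claimed to be the exact rate rule used in this paper.

        def adaptive_rates(f_cross, f_mut, f_max, f_avg,
                           k1=0.9, k2=0.5, k3=0.9, k4=0.5):
            # Crossover rate for a pair (using the fitter parent's fitness f_cross)
            # and mutation rate for an individual of fitness f_mut: fitter than
            # average -> smaller rate, otherwise a fixed larger rate.
            span = max(f_max - f_avg, 1e-12)
            pc = k1 * (f_max - f_cross) / span if f_cross >= f_avg else k3
            pm = k2 * (f_max - f_mut) / span if f_mut >= f_avg else k4
            return min(pc, 1.0), min(pm, 1.0)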

  17. Ultimately accurate SRAF replacement for practical phases using an adaptive search algorithm based on the optimal gradient method

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Nosato, Hirokazu; Matsunawa, Tetsuaki; Miyairi, Masahiro; Nojima, Shigeki; Tanaka, Satoshi; Sakanashi, Hidenori; Murakawa, Masahiro; Saito, Tamaki; Higuchi, Tetsuya; Inoue, Soichi

    2010-04-01

    The SRAF (Sub-Resolution Assist Feature) technique has been widely used for DOF enhancement. Below the 40nm design node, even when the SRAF technique is used, the resolution limit is approached due to the use of hyper-NA imaging or low-k1 lithography conditions, especially for the contact layer. As a result, complex layout patterns or random patterns such as logic data or intermediate-pitch patterns become increasingly sensitive to photo-resist pattern fidelity. This means that the need for a more accurate resolution technique is increasing in order to cope with lithographic patterning fidelity issues under low-k1 lithography conditions. To address these issues, new SRAF techniques such as model-based SRAF using an interference map or inverse lithography have been proposed. However, these approaches do not provide enough assurance of accuracy or performance, because the ideal mask generated by these techniques is lost when switching to a manufacturable mask with Manhattan structures. As a result it can be very hard to put them into practice and into the production flow. In this paper, we propose a novel method for extremely accurate SRAF placement using an adaptive search algorithm. In this method, the initial position of each SRAF is generated by traditional SRAF placement such as rule-based SRAF, and it is then adjusted by an adaptive algorithm driven by the evaluation of lithography simulation. This method has three advantages: preciseness, efficiency and industrial applicability. Firstly, the lithography simulation uses an actual computational model that considers the process window, so our proposed method can precisely adjust the SRAF positions and consequently acquire the best SRAF positions. Secondly, because our adaptive algorithm is based on the optimal gradient method, which is a very simple algorithm with a rectilinear search, the SRAF positions can be adjusted with high efficiency. Thirdly, our proposed method, which utilizes the traditional SRAF placement, is

  18. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  19. A vehicle ABS adaptive sliding-mode control algorithm based on the vehicle velocity estimation and tyre/road friction coefficient estimations

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangwen; Xu, Yong; Pan, Ming; Ren, Fenghua

    2014-04-01

    A sliding-mode observer is designed to estimate the vehicle velocity with the measured vehicle acceleration, the wheel speeds and the braking torques. Based on the Burckhardt tyre model, the extended Kalman filter is designed to estimate the parameters of the Burckhardt model with the estimated vehicle velocity, the measured wheel speeds and the vehicle acceleration. According to the estimated parameters of the Burckhardt tyre model, the tyre/road friction coefficients and the optimal slip ratios are calculated. A vehicle adaptive sliding-mode control (SMC) algorithm is presented with the estimated vehicle velocity, the tyre/road friction coefficients and the optimal slip ratios. And the adjustment method of the sliding-mode gain factors is discussed. Based on the adaptive SMC algorithm, a vehicle's antilock braking system (ABS) control system model is built with the Simulink Toolbox. Under the single-road condition as well as the different road conditions, the performance of the vehicle ABS system is simulated with the vehicle velocity observer, the tyre/road friction coefficient estimator and the adaptive SMC algorithm. The results indicate that the estimated errors of the vehicle velocity and the tyre/road friction coefficients are acceptable and the vehicle ABS adaptive SMC algorithm is effective. So the proposed adaptive SMC algorithm can be used to control the vehicle ABS without the information of the vehicle velocity and the road conditions.
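
    As a very rough illustration of the slip-ratio sliding-mode idea (not the paper's full design with its velocity observer, friction estimator and gain adaptation), the Python sketch below computes a braking-torque command from the error between the estimated slip ratio and the optimal slip ratio, using a boundary-layer saturation to limit chattering; all gains and limits are illustrative assumptions.

        def smc_brake_torque(t_demand, v_vehicle, omega_wheel, r_wheel, slip_opt,
                             k_prop=2000.0, k_switch=800.0, phi=0.02, t_max=3000.0):
            # Slip ratio from estimated vehicle speed and measured wheel speed.
            slip = (v_vehicle - omega_wheel * r_wheel) / max(v_vehicle, 0.1)
            s = slip - slip_opt                                # sliding surface
            sat = max(-1.0, min(1.0, s / phi))                 # boundary-layer saturation
            torque = t_demand - (k_prop * s + k_switch * sat)  # back off as slip grows
            return max(0.0, min(torque, t_max))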

  20. Algorithms for adaptive nonlinear pattern recognition

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric; Key, Gary

    2011-09-01

    In Bayesian pattern recognition research, static classifiers have featured prominently in the literature. A static classifier is essentially based on a static model of input statistics, thereby assuming input ergodicity that is not realistic in practice. Classical Bayesian approaches attempt to circumvent the limitations of static classifiers, which can include brittleness and narrow coverage, by training extensively on a data set that is assumed to cover more than the subtense of expected input. Such assumptions are not realistic for more complex pattern classification tasks, for example, object detection using pattern classification applied to the output of computer vision filters. In contrast, we have developed a two step process, that can render the majority of static classifiers adaptive, such that the tracking of input nonergodicities is supported. Firstly, we developed operations that dynamically insert (or resp. delete) training patterns into (resp. from) the classifier's pattern database, without requiring that the classifier's internal representation of its training database be completely recomputed. Secondly, we developed and applied a pattern replacement algorithm that uses the aforementioned pattern insertion/deletion operations. This algorithm is designed to optimize the pattern database for a given set of performance measures, thereby supporting closed-loop, performance-directed optimization. This paper presents theory and algorithmic approaches for the efficient computation of adaptive linear and nonlinear pattern recognition operators that use our pattern insertion/deletion technology - in particular, tabular nearest-neighbor encoding (TNE) and lattice associative memories (LAMs). Of particular interest is the classification of nonergodic datastreams that have noise corruption with time-varying statistics. The TNE and LAM based classifiers discussed herein have been successfully applied to the computation of object classification in hyperspectral

  1. Adaptive MANET multipath routing algorithm based on the simulated annealing approach.

    PubMed

    Kim, Sungwook

    2014-01-01

    A mobile ad hoc network represents a system of wireless mobile nodes that can freely and dynamically self-organize network topologies without any preexisting communication infrastructure. Due to characteristics like temporary topology and the absence of a centralized authority, routing is one of the major issues in ad hoc networks. In this paper, a new multipath routing scheme is proposed by employing a simulated annealing approach. The proposed metaheuristic approach can achieve greater and reciprocal advantages in hostile, dynamic, real-world network situations. Therefore, the proposed routing scheme is a powerful method for finding an effective solution to the conflicting requirements of the mobile ad hoc network routing problem. Simulation results indicate that the proposed paradigm adapts best to the variation of dynamic network situations. The average remaining energy, network throughput, packet loss probability, and traffic load distribution are improved by about 10%, 10%, 5%, and 10%, respectively, compared with the existing schemes.
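
    The simulated-annealing core of such a scheme is a simple accept/reject loop over candidate routes: a worse route is still accepted with a probability that shrinks as the temperature is lowered, which helps escape poor local choices in a changing topology. The Python sketch below shows that loop under assumed names; candidate_paths and path_cost are hypothetical stand-ins for the routing-specific neighbourhood and the multi-criteria cost used in the paper.

        import math
        import random

        def sa_route(candidate_paths, path_cost, t0=1.0, alpha=0.95, iters=500, seed=0):
            rng = random.Random(seed)
            current = rng.choice(candidate_paths)
            best, t = current, t0
            for _ in range(iters):
                neighbour = rng.choice(candidate_paths)   # simple neighbourhood move
                delta = path_cost(neighbour) - path_cost(current)
                if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
                    current = neighbour                   # accept better, or worse with prob.
                if path_cost(current) < path_cost(best):
                    best = current
                t *= alpha                                # geometric cooling schedule
            return best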

  2. New intensity-hue-saturation pan-sharpening method based on texture analysis and genetic algorithm-adaption

    NASA Astrophysics Data System (ADS)

    Masoudi, Rasoul; Kabiri, Peyman

    2014-01-01

    Pansharpening aims to fuse a low-resolution multispectral image with a high-resolution panchromatic image to create a multispectral image with high spatial and spectral resolution. The intensity-hue-saturation (IHS) fusion method transforms an image from RGB space to IHS space. This paper reports a method to improve the spectral quality of the final multispectral image. The proposed method applies two modifications to the basic IHS method to improve the sharpness of the final image. First, the paper proposes a method based on a genetic algorithm to find the weight of each band of the multispectral image in the fusion process. Then, a texture-based technique is proposed to preserve the spectral information of the final image with respect to the texture boundaries. Spectral quality metrics in terms of SAM, SID, Q-average, RASE, RMSE, CC, ERGAS and UIQI are used in our experiments. Experimental results on IKONOS and QuickBird data show that the proposed method is more efficient than the original IHS-based fusion approach and some of its extensions, such as IKONOS IHS, edge-adaptive IHS and explicit band coefficient IHS, in preserving the spectral information of multispectral images.

  3. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.

  4. Adaptive DNA Computing Algorithm by Using PCR and Restriction Enzyme

    NASA Astrophysics Data System (ADS)

    Kon, Yuji; Yabe, Kaoru; Rajaee, Nordiana; Ono, Osamu

    In this paper, we introduce an adaptive DNA computing algorithm using the polymerase chain reaction (PCR) and a restriction enzyme. The adaptive algorithm is designed based on the Adleman-Lipton paradigm [3] of DNA computing. In this work, however, unlike the Adleman-Lipton architecture, a cutting operation is introduced into the algorithm, and a mechanism is devised in which the molecules used in the computation are fed back to the next cycle. Moreover, PCR amplification is performed on the fed-back molecules, so that the concentration differences arising among the base sequences can be used again. By this operation the molecules that serve as solution candidates can be narrowed down, and the optimal solution of the shortest-path problem is obtained. The validity of the proposed adaptive algorithm is examined by logical simulation, and finally we propose applying the adaptive algorithm to a chemical experiment that uses actual DNA molecules to solve an optimal network problem.

  5. Adaptive snakes using the EM algorithm.

    PubMed

    Nascimento, Jacinto C; Marques, Jorge S

    2005-11-01

    Deformable models (e.g., snakes) perform poorly in many image analysis problems. The contour model is attracted by edge points detected in the image. However, many edge points do not belong to the object contour, preventing the active contour from converging toward the object boundary. A new algorithm is proposed in this paper to overcome this difficulty. The algorithm is based on two key ideas. First, edge points are associated in strokes. Second, each stroke is classified as valid (inlier) or invalid (outlier) and a confidence degree is associated to each stroke. The expectation maximization algorithm is used to update the confidence degrees and to estimate the object contour. It is shown that this is equivalent to the use of an adaptive potential function which varies during the optimization process. Valid strokes receive high confidence degrees while confidence degrees of invalid strokes tend to zero during the optimization process. Experimental results are presented to illustrate the performance of the proposed algorithm in the presence of clutter, showing a remarkable robustness.

  6. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on nonlinear least-square support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights, one is a recursive regression procedure to estimate the weight vectors on real-time, the other is a sparse model with novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieve near optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.

  7. Adaptive sensor fusion using genetic algorithms

    SciTech Connect

    Fitzgerald, D.S.; Adams, D.G.

    1994-08-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion.

  8. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  9. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  10. Adaptive Multi-Objective Sub-Pixel Mapping Framework Based on Memetic Algorithm for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Zhang, L.

    2012-07-01

    Sub-pixel mapping techniques can specify the location of each class within a pixel based on the assumption of spatial dependence. Traditional sub-pixel mapping algorithms only consider spatial dependence at the pixel level; the spatial dependence of each sub-pixel is ignored and the sub-pixel spatial relation is lost. In this paper, a novel multi-objective sub-pixel mapping framework based on the memetic algorithm, namely MSMF, is proposed. In MSMF, sub-pixel mapping is transformed into a multi-objective optimization problem that maximizes the spatial dependence index (SDI) and Moran's I simultaneously. A memetic algorithm, which combines global search strategies with local search heuristics, is utilized to solve the multi-objective problem. In this framework, the sub-pixel mapping problem can be solved using different evolutionary algorithms and local algorithms. In this paper, a memetic algorithm based on the clonal selection algorithm (CSA) and random swapping is designed as an example and applied in the proposed MSMF. In MSMF, CSA inherits the biological properties of the human immune system, i.e., cloning, mutation, and memory, to search for possible sub-pixel mapping solutions in the global space. After the exploration based on CSA, a local search based on random swapping is employed to dynamically decide which neighbourhood should be selected to stress exploitation in each generation. In addition, a solution set is used in MSMF to hold and update the obtained non-dominated solutions for the multi-objective problem. Experimental results demonstrate that the proposed approach outperforms traditional sub-pixel mapping algorithms, and hence provides an effective option for sub-pixel mapping of hyperspectral remote sensing imagery.

  11. AIDA: Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Haase, Sebastian; Marchis, Franck

    2013-10-01

    AIDA is an implementation and extension of the MISTRAL myopic deconvolution method developed by Mugnier et al. (2004) (see J. Opt. Soc. Am. A 21:1841-1854). The MISTRAL approach has been shown to yield object reconstructions with excellent edge preservation and photometric precision when used to process astronomical images. AIDA improves upon the original MISTRAL implementation. AIDA, written in Python, can deconvolve multiple frame data and three-dimensional image stacks encountered in adaptive optics and light microscopic imaging.

  12. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
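
    The abstract does not reproduce the thresholding rule itself, so the following is only a deliberately simplified sketch of the underlying idea: couple a dark-pixel threshold on the SAR backscatter to the estimated wind speed. The offset, gain, and synthetic image below are invented for illustration and are not the values used in the paper.

```python
import numpy as np

def wind_adaptive_threshold(sar_db, wind_speed, base_offset=-4.0, wind_gain=0.25):
    """Flag dark (possible oil) pixels in a SAR backscatter image given in dB.

    The threshold sits below the scene mean by an offset that is adjusted with
    wind speed; base_offset and wind_gain are illustrative tuning constants.
    """
    threshold = sar_db.mean() + base_offset + wind_gain * (5.0 - wind_speed)
    return sar_db < threshold

# toy usage: a synthetic 100x100 backscatter image containing a darker patch
rng = np.random.default_rng(3)
image = rng.normal(-10.0, 1.0, size=(100, 100))
image[40:60, 40:60] -= 5.0                    # simulated slick
mask = wind_adaptive_threshold(image, wind_speed=7.0)
print("flagged pixels:", int(mask.sum()))
```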

  13. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
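
    The exact step-size adaptation rule is not spelled out in the abstract; the sketch below is a generic cuckoo search with Lévy flights in which the step size simply decays geometrically over the iterations, as a stand-in for the adaptive strategy. Population size, decay schedule, and the sphere test function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def adaptive_cuckoo_search(f, dim=10, n_nests=25, iters=500, pa=0.25,
                           alpha_max=1.0, alpha_min=0.01, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    for t in range(iters):
        # adaptive step size: shrink from alpha_max to alpha_min as iterations proceed
        alpha = alpha_max * (alpha_min / alpha_max) ** (t / iters)
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            cand = np.clip(nests[i] + alpha * levy_step(dim) * (nests[i] - best), lo, hi)
            fc = f(cand)
            if fc < fitness[i]:
                nests[i], fitness[i] = cand, fc
        # abandon a fraction pa of the worst nests and rebuild them randomly
        worst = np.argsort(fitness)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[np.argmin(fitness)], fitness.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = adaptive_cuckoo_search(sphere)
print("best objective value found:", best_f)
```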

  14. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.

  15. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises cause degradation of the imaging quality used for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified with synthetic and real GREATEM signals. The results show that in the GREATEM signal, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
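
    A minimal sketch of the two building blocks named above, using PyWavelets for universal soft thresholding and a per-window exponential fit on a synthetic decay curve; the window count, wavelet choice, and the rule for replacing a window are simplifications, not the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_soft_denoise(signal, wavelet='db4', level=5):
    """Remove stationary white noise with universal soft thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

def exponential_window_fit(t, y, n_windows=8):
    """Fit each (logarithmically spaced) window of a decay curve with an exponential.

    In the paper only the windows flagged as polluted by non-stationary noise are
    replaced; here every window is replaced by its exponential fit for illustration.
    """
    out = y.copy()
    edges = np.unique(np.geomspace(1, len(t) - 1, n_windows + 1).astype(int))
    for a, b in zip(edges[:-1], edges[1:]):
        seg_t, seg_y = t[a:b + 1], np.abs(y[a:b + 1]) + 1e-12
        slope, intercept = np.polyfit(seg_t, np.log(seg_y), 1)
        out[a:b + 1] = np.exp(intercept + slope * seg_t)
    return out

# toy transient: exponential decay plus white noise and a burst of "sferics"
t = np.linspace(1e-4, 1e-1, 2048)
clean = np.exp(-60.0 * t)
noisy = clean + 0.02 * np.random.default_rng(2).normal(size=t.size)
noisy[900:920] += 0.3                                        # simulated EM burst
denoised = exponential_window_fit(t, wavelet_soft_denoise(noisy))
print("rms error vs clean decay:", float(np.sqrt(np.mean((denoised - clean) ** 2))))
```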

  16. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images, processed with the TV method, shearlet denoising, and edge information recovered from the remnant of the TV method and processed with the ST, are fused adaptively. The enhanced images produced by the proposed method show improved visibility and detectability of details in medical images. For the proposed method, different weights are evaluated from the variance maps of each individual denoised image and the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.

  17. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.

  18. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

    A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Secondly, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system, exhibiting the effectiveness of the proposed algorithm. Computational complexity analysis and simulation results demonstrate its effectiveness.

  19. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  20. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  1. A Bayesian Adaptive Basis Algorithm for Single Particle Reconstruction

    PubMed Central

    Kucukelbir, Alp; Sigworth, Fred J.; Tagare, Hemant D.

    2012-01-01

    Traditional single particle reconstruction methods use either the Fourier or the delta function basis to represent the particle density map. This paper proposes a more flexible algorithm that adaptively chooses the basis based on the data. Because the basis adapts to the data, the reconstruction resolution and signal-to-noise ratio (SNR) are improved compared to a reconstruction with a fixed basis. Moreover, the algorithm automatically masks the particle, thereby separating it from the background. This eliminates the need for ad-hoc filtering or masking in the refinement loop. The algorithm is formulated in a Bayesian maximum-a-posteriori framework and uses an efficient optimization algorithm for the maximization. Evaluations using simulated and actual cryogenic electron microscopy data show resolution and SNR improvements as well as the effective masking of the particle from the background. PMID:22564910

  2. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, and surface position) are telemetered to a ground computer, which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
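
    The parallel-channel idea can be illustrated with a bank of scalar Kalman filters running at fixed candidate parameter values, whose innovation likelihoods weight the parameter estimate. This is a generic multiple-model sketch, not the F-8 implementation; the plant, noise levels, and candidate grid are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown scalar plant parameter 'a' in x[k+1] = a*x[k] + w;  y[k] = x[k] + v.
# A bank of Kalman filters runs at fixed candidate values of 'a'; their measurement
# likelihoods weight the parameter estimate, in the spirit of the parallel-channel scheme.
a_true, q, r = 0.9, 0.01, 0.04
candidates = np.array([0.5, 0.7, 0.9, 1.1])

x_hat = np.zeros(len(candidates))     # per-filter state estimate
p_cov = np.ones(len(candidates))      # per-filter error covariance
weights = np.full(len(candidates), 1.0 / len(candidates))

x, estimate_history = 1.0, []
for k in range(200):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    for i, a in enumerate(candidates):
        # predict
        x_pred = a * x_hat[i]
        p_pred = a * p_cov[i] * a + q
        # update
        s = p_pred + r                         # innovation variance
        gain = p_pred / s
        innov = y - x_pred
        x_hat[i] = x_pred + gain * innov
        p_cov[i] = (1.0 - gain) * p_pred
        # Gaussian likelihood of the innovation drives the channel weight
        weights[i] *= np.exp(-0.5 * innov ** 2 / s) / np.sqrt(2 * np.pi * s)
    weights /= weights.sum()
    estimate_history.append(float(weights @ candidates))

print("final blended parameter estimate:", estimate_history[-1])
```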

  3. Adaptive scene-based correction algorithm for removal of residual fixed pattern noise in microgrid image data

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; LeMaster, Daniel A.

    2012-06-01

    Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.

  4. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1993-03-01

    Path planning has to be fast to support real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To alleviate this problem, we present a learning algorithm that uses past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful subgoals is learned to support faster planning. The algorithm is suitable for both stationary and incrementally-changing environments. To analyze our algorithm, we use a previously developed stochastic model that quantifies experience utility. Using this model, we characterize the situations in which the adaptive planner is useful, and provide quantitative bounds to predict its behavior. The results are demonstrated with problems in manipulator planning. Our algorithm and analysis are sufficiently general that they may also be applied to task planning or other planning domains in which experience is useful.

  5. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  6. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  7. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique for edge detection is proposed. The proposed AIGA recommends the use of adaptive probabilities of crossover, mutation and immune operation, and a geometric annealing schedule in the immune operator to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit some prior knowledge and information of the local edge structure in the edge image to make vaccines, which results in much better local search ability of AIGA than that of the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of quality of the final edge image, rate of convergence and robustness to noise.

  8. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
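
    An increase-only adaptation loop around restarted GMRES can be sketched with SciPy as below; the doubling rule, the 0.5 residual-reduction test, and the toy tridiagonal system are stand-ins for the criteria developed in the paper, not the actual strategy.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

def adaptive_gmres(A, b, k0=5, k_max=50, cycles=20, target=2e-5):
    """Increase-only adaptation of the GMRES restart value k.

    After each restart cycle the residual reduction is checked; if convergence is
    judged too slow, the restart value is enlarged.  The target is chosen to match
    gmres' default relative tolerance, and the growth rule is illustrative.
    """
    x = np.zeros_like(b)
    k = k0
    res_old = np.linalg.norm(b - A @ x)
    for _ in range(cycles):
        x, _ = gmres(A, b, x0=x, restart=k, maxiter=1)   # one restart cycle of GMRES(k)
        res_new = np.linalg.norm(b - A @ x)
        if res_new < target * np.linalg.norm(b):
            break
        if res_new > 0.5 * res_old:                      # poor reduction: grow k
            k = min(2 * k, k_max)
        res_old = res_new
    return x, k

# toy nonsymmetric tridiagonal system
n = 200
A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
x, k_final = adaptive_gmres(A, b)
print("final restart value:", k_final, " residual:", np.linalg.norm(b - A @ x))
```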

  9. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper presents a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-layer neural network and GMM classifiers while having fewer free parameters and

  10. MLEM algorithm adaptation for improved SPECT scintimammography

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Feiglin, David H.; Lee, Wei; Kunniyur, Vikram R.; Gangal, Kedar R.; Coman, Ioana L.; Lipson, Edward D.; Karczewski, Deborah A.; Thomas, F. Deaver

    2005-04-01

    Standard MLEM and OSEM algorithms used in SPECT Tc-99m sestamibi scintimammography produce hot-spot artifacts (HSA) at the image support peripheries. We investigated a suitable adaptation of the MLEM and OSEM algorithms needed to reduce HSA. Patients with suspicious breast lesions were administered 10 mCi of Tc-99m sestamibi, and SPECT scans were acquired for patients in the prone position with uncompressed breasts. In addition, to simulate breast lesions, some patients were imaged with a number of breast skin markers each containing 1 mCi of Tc-99m. In order to reduce HSA in reconstruction, we removed from the backprojection step the rays that traverse the periphery of the support region on the way to a detector bin when their path length through this region was shorter than some critical length. Such very short paths result in very low projection counts contributed to the detector bin, and consequently to overestimation of the activity in the peripheral voxels in the backprojection step, thus creating HSA. We analyzed the breast-lesion contrast and suppression of HSA in the images reconstructed using the standard and modified MLEM and OSEM algorithms vs. critical path length (CPL). For CPL >= 0.01 pixel size, we observed improved breast-lesion contrast and lower noise in the reconstructed images, and a very significant reduction of HSA in the maximum intensity projection (MIP) images.
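
    The modification is easy to express on top of a vanilla MLEM update: drop the rays whose path length through the support falls below the critical path length before backprojecting. The sketch below does this for a synthetic 1-D phantom with a random system matrix; the geometry and values are illustrative only, not the clinical setup.

```python
import numpy as np

def mlem(projections, system_matrix, n_iter=30, critical_path=0.0):
    """Basic MLEM reconstruction with a critical-path-length ray cut.

    system_matrix[j, v] holds the intersection length of ray j with voxel v.  Rays
    whose total path through the support is below `critical_path` are dropped from
    the backprojection, which is the essence of the hot-spot-artifact suppression.
    """
    path_lengths = system_matrix.sum(axis=1)
    keep = path_lengths > critical_path
    A = system_matrix[keep]
    y = projections[keep]
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0) + 1e-12
    for _ in range(n_iter):
        forward = A @ x + 1e-12
        x *= (A.T @ (y / forward)) / sensitivity
    return x

# synthetic 1-D "object" and random ray geometry
rng = np.random.default_rng(5)
true_x = np.concatenate([np.zeros(10), np.full(20, 4.0), np.zeros(10)])
A_full = rng.uniform(0.0, 1.0, size=(120, true_x.size)) * (rng.random((120, true_x.size)) < 0.2)
y_meas = rng.poisson(A_full @ true_x).astype(float)
recon = mlem(y_meas, A_full, critical_path=0.5)
print("reconstruction peak:", recon.max())
```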

  11. AIDA: An Adaptive Image Deconvolution Algorithm

    NASA Astrophysics Data System (ADS)

    Hom, Erik; Marchis, F.; Lee, T. K.; Haase, S.; Agard, D. A.; Sedat, J. W.

    2007-10-01

    We recently described an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging [Hom et al., J. Opt. Soc. Am. A 24, 1580 (2007)]. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. AIDA includes a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. Here, we present a gallery of results demonstrating the effectiveness of AIDA in processing planetary science images acquired using adaptive-optics systems. Offered as an open-source alternative to MISTRAL, AIDA is available for download and further development at: http://msg.ucsf.edu/AIDA. This work was supported in part by the W. M. Keck Observatory, the National Institutes of Health, NASA, the National Science Foundation Science and Technology Center for Adaptive Optics at UC-Santa Cruz, and the Howard Hughes Medical Institute.

  12. Adaptive Flocking of Robot Swarms: Algorithms and Properties

    NASA Astrophysics Data System (ADS)

    Lee, Geunho; Chong, Nak Young

    This paper presents a distributed approach for adaptive flocking of swarms of mobile robots that enables them to navigate autonomously in complex environments populated with obstacles. Based on the observation of the swimming behavior of a school of fish, we propose an integrated algorithm that allows a swarm of robots to navigate in a coordinated manner, split into multiple swarms, or merge with other swarms according to the environment conditions. We prove the convergence of the proposed algorithm using Lyapunov stability theory. We also verify the effectiveness of the algorithm through extensive simulations, where a swarm of robots repeats the process of splitting and merging while passing around multiple stationary and moving obstacles. The simulation results show that the proposed algorithm is scalable, and robust to variations in the sensing capability of individual robots.

  13. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1995-03-01

    To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.

  14. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role during the decoding processing of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations can result in large table memory access and then lead to high table power consumption. Aiming to solve the problem of the large table memory access of current methods, and thereby reduce the high power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in the introduction of index search technology to reduce the memory access for table look-up, and thereby reduce the table power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word on the basis of taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with table look-up by a sequential search scheme, and thus save much power consumption for CAVLD in H.264/AVC.

  15. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.

  16. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  17. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
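
    One simple way to realize the idea is to let a handful of SSA runs nominate the states that the FSP truncation should keep, then exponentiate the CME generator restricted to those states. The birth-death model and run counts below are illustrative assumptions, and the reduction is far cruder than the method developed in the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
k_birth, k_death = 10.0, 1.0      # birth-death rates: 0 -> X at k_birth, X -> 0 at k_death * n
t_final, x0 = 1.0, 0

def ssa_states(n_runs=200):
    """Run Gillespie SSA several times and collect every state visited before t_final."""
    visited = {x0}
    for _ in range(n_runs):
        t, n = 0.0, x0
        while True:
            rates = np.array([k_birth, k_death * n])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            if t > t_final:
                break
            n += 1 if rng.random() < rates[0] / total else -1
            visited.add(n)
    return sorted(visited)

def fsp_solve(states):
    """Build the CME generator restricted to the SSA-visited states and exponentiate it."""
    index = {s: i for i, s in enumerate(states)}
    m = len(states)
    A = np.zeros((m, m))
    for s, i in index.items():
        for rate, s_new in ((k_birth, s + 1), (k_death * s, s - 1)):
            if rate > 0.0:
                A[i, i] -= rate                   # probability leaving state s
                if s_new in index:
                    A[index[s_new], i] += rate    # probability flowing into s_new
    p0 = np.zeros(m)
    p0[index[x0]] = 1.0
    return expm(A * t_final) @ p0

states = ssa_states()
p = fsp_solve(states)
print("FSP truncation size:", len(states), " retained probability:", p.sum())
```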

  18. Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.

  19. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
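
    For context, the real-valued Gaussian-kernel LMS that Quat-KLMS generalizes can be written in a few lines; the quaternion RKHS, the HR-calculus gradient, and the widely linear extension of the paper are not reproduced here, and the toy regression target is arbitrary.

```python
import numpy as np

class KernelLMS:
    """Real-valued Gaussian-kernel LMS; Quat-KLMS extends this idea to quaternion inputs."""

    def __init__(self, step=0.2, width=1.0):
        self.step, self.width = step, width
        self.centers, self.coeffs = [], []

    def _kernel(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return sum(c * self._kernel(x, u) for c, u in zip(self.coeffs, self.centers))

    def update(self, x, d):
        err = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.step * err)       # each training sample adds one kernel unit
        return err

# learn a simple nonlinear map y = sin(x0) * x1
rng = np.random.default_rng(6)
filt = KernelLMS()
errors = []
for _ in range(500):
    x = rng.uniform(-2, 2, size=2)
    d = np.sin(x[0]) * x[1]
    errors.append(filt.update(x, d) ** 2)
print("mean squared error over the last 100 samples:", float(np.mean(errors[-100:])))
```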

  20. An efficient sampling algorithm with adaptations for Bayesian variable selection.

    PubMed

    Araki, Takamitsu; Ikeda, Kazushi; Akaho, Shotaro

    2015-01-01

    In Bayesian variable selection, indicator model selection (IMS) is a class of well-known sampling algorithms, which has been used in various models. The IMS is a class of methods that uses pseudo-priors, and it contains specific methods such as Gibbs variable selection (GVS) and Kuo and Mallick's (KM) method. However, the efficiency of the IMS strongly depends on the parameters of the proposal distribution and the pseudo-priors. Specifically, the GVS determines these parameters based on a pilot run for a full model, and the KM method sets them to those of the priors, which often leads to slow mixing. In this paper, we propose an algorithm that adapts the parameters of the IMS during running. The parameters obtained on the fly provide an appropriate proposal distribution and pseudo-priors, which improve the mixing of the algorithm. We also prove a convergence theorem for the proposed algorithm, and confirm through experiments on Bayesian variable selection that the algorithm is more efficient than the conventional algorithms.

  1. Discrete-time minimal control synthesis adaptive algorithm

    NASA Astrophysics Data System (ADS)

    di Bernardo, M.; di Gennaro, F.; Olm, J. M.; Santini, S.

    2010-12-01

    This article proposes a discrete-time Minimal Control Synthesis (MCS) algorithm for a class of single-input single-output discrete-time systems written in controllable canonical form. As it happens with the continuous-time MCS strategy, the algorithm arises from the family of hyperstability-based discrete-time model reference adaptive controllers introduced in (Landau, Y. (1979), Adaptive Control: The Model Reference Approach, New York: Marcel Dekker, Inc.) and is able to ensure tracking of the states of a given reference model with minimal knowledge about the plant. The control design shows robustness to parameter uncertainties, slow parameter variation and matched disturbances. Furthermore, it is proved that the proposed discrete-time MCS algorithm can be used to control discretised continuous-time plants with the same performance features. Contrary to previous discrete-time implementations of the continuous-time MCS algorithm, here a formal proof of asymptotic stability is given for generic n-dimensional plants in controllable canonical form. The theoretical approach is validated by means of simulation results.

  2. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively. PMID:25397812
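
    A compact sketch of a firefly search in which the absorption coefficient is tied to the current spread of the swarm, loosely mirroring (not reproducing) AdaFa's distance-based strategy; the decay schedule for the randomization parameter is likewise just one of several possible dynamic choices.

```python
import numpy as np

rng = np.random.default_rng(8)

def adaptive_firefly(f, dim=10, n_fireflies=25, iters=300,
                     beta0=1.0, alpha0=0.5, bounds=(-5.0, 5.0)):
    """Simplified firefly search with a distance-scaled absorption coefficient."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_fireflies, dim))
    fit = np.apply_along_axis(f, 1, pos)
    for t in range(iters):
        alpha = alpha0 * 0.97 ** t                         # decaying randomization
        mean_dist = np.mean([np.linalg.norm(a - b) for a in pos for b in pos]) + 1e-12
        gamma = 1.0 / mean_dist ** 2                       # absorption tied to swarm spread
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] < fit[i]:                        # move i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                    pos[i] = np.clip(pos[i], lo, hi)
                    fit[i] = f(pos[i])
    best = np.argmin(fit)
    return pos[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
print("best value found:", adaptive_firefly(sphere)[1])
```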

  3. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively.

  4. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
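
    The unified mutation expression itself is not given in the abstract, so the sketch below illustrates only the self-adaptation aspect, using a jDE-style DE/rand/1/bin scheme in which each individual's F and CR evolve with the population; all constants are conventional defaults rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(9)

def self_adaptive_de(f, dim=10, pop_size=40, gens=500, bounds=(-5.0, 5.0)):
    """DE/rand/1/bin with self-adapting control parameters (jDE-style)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    for _ in range(gens):
        for i in range(pop_size):
            # occasionally resample this individual's control parameters
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.uniform(0.0, 1.0) if rng.random() < 0.1 else CR[i]
            a, b, c = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                       # greedy selection keeps the better vector
                pop[i], fit[i] = trial, ft
                F[i], CR[i] = Fi, CRi              # successful parameters survive
    best = np.argmin(fit)
    return pop[best], fit[best]

rastrigin = lambda x: float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
print("best Rastrigin value found:", self_adaptive_de(rastrigin)[1])
```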

  5. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.

  6. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. This algorithm can be applied to different computational problems that involve path-based formulations, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. This algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems. An extensive derivation of the algorithm enables it to be applied to shortest path problems.

  7. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)

  8. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm. PMID:25265622

  9. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  10. Adaptivity and smart algorithms for fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1990-01-01

    This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of algorithms designed to function on the grid. The application of these new techniques to several problem classes are addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.

  11. Characterization of atmospheric contaminant sources using adaptive evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Cervone, Guido; Franzese, Pasquale; Grajdeanu, Adrian

    2010-10-01

    The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass field experiment of dispersion, two for each atmospheric stability class, ranging from very unstable to stable atmosphere. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method which were used as benchmarks.
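
    A bare-bones version of the inversion loop can be written as a simple evolution strategy wrapped around a ground-level Gaussian plume forward model; the dispersion coefficients, parameter ranges, and mutation schedule below are invented for illustration and are much simpler than the AES and plume model used in the study.

```python
import numpy as np

rng = np.random.default_rng(10)

def gaussian_plume(q, x_src, y_src, wind_dir, receptors, u=3.0):
    """Ground-level Gaussian plume concentration at receptor points.

    Simple power-law dispersion coefficients are used; the plume model, stability
    treatment, and source description are all simplified relative to the paper.
    """
    dx = receptors[:, 0] - x_src
    dy = receptors[:, 1] - y_src
    # rotate into the wind-aligned frame
    along = dx * np.cos(wind_dir) + dy * np.sin(wind_dir)
    cross = -dx * np.sin(wind_dir) + dy * np.cos(wind_dir)
    conc = np.zeros_like(along)
    down = along > 1.0
    sig_y = 0.08 * along[down] ** 0.9
    sig_z = 0.06 * along[down] ** 0.85
    conc[down] = q / (np.pi * u * sig_y * sig_z) * np.exp(-0.5 * (cross[down] / sig_y) ** 2)
    return conc

def estimate_source(observed, receptors, generations=200, pop=40, elite=8):
    """(mu, lambda)-style evolution strategy over (q, x, y, wind_dir)."""
    genes = np.column_stack([rng.uniform(0.5, 20, pop), rng.uniform(-200, 200, pop),
                             rng.uniform(-200, 200, pop), rng.uniform(0, 2 * np.pi, pop)])
    sigma = np.array([2.0, 40.0, 40.0, 0.5])
    for _ in range(generations):
        errs = [np.sum((gaussian_plume(*ind, receptors) - observed) ** 2) for ind in genes]
        parents = genes[np.argsort(errs)[:elite]]
        sigma *= 0.98                                # shrinking mutation widths over time
        genes = parents[rng.integers(elite, size=pop)] + rng.normal(0, 1, (pop, 4)) * sigma
    errs = [np.sum((gaussian_plume(*ind, receptors) - observed) ** 2) for ind in genes]
    return genes[np.argmin(errs)]

receptors = np.column_stack([rng.uniform(50, 500, 30), rng.uniform(-150, 150, 30)])
truth = (5.0, 0.0, 0.0, 0.0)                         # emission rate, position, wind direction
data = gaussian_plume(*truth, receptors)
print("best candidate source parameters:", estimate_source(data, receptors))
```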

  12. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
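
    A minimal adaptive multilevel splitting sketch for a 1-D double-well diffusion, using the maximum excursion as the reaction coordinate and killing one replica per iteration; the dynamics, levels, and replica count are illustrative assumptions, and none of the paper's convergence diagnostics are reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, beta = 1e-3, 6.0          # time step and inverse temperature
a_level, b_level = -0.8, 0.8  # metastable set A and target set B (1-D double well)

def drift(x):                 # force for the double-well potential V(x) = (x^2 - 1)^2 / 4
    return -x * (x**2 - 1)

def simulate(x0):
    """Integrate overdamped Langevin dynamics until the walker returns to A or reaches B."""
    traj, x = [x0], x0
    while a_level < x < b_level:
        x = x + drift(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        traj.append(x)
    return np.array(traj)

def ams(n_rep=100, n_iter=2000):
    trajs = [simulate(a_level + 0.05) for _ in range(n_rep)]
    levels = np.array([t.max() for t in trajs])     # reaction coordinate: max excursion
    weight = 1.0
    for _ in range(n_iter):
        worst = np.argmin(levels)
        if levels[worst] >= b_level:                # every replica has reached B
            break
        weight *= 1.0 - 1.0 / n_rep                 # splitting weight update
        donor = rng.integers(n_rep)                 # branch from a surviving replica
        while donor == worst:
            donor = rng.integers(n_rep)
        src = trajs[donor]
        cut = np.argmax(src > levels[worst])        # first index above the killed level
        new = np.concatenate([src[:cut + 1], simulate(src[cut])[1:]])
        trajs[worst], levels[worst] = new, new.max()
    return weight * np.mean(levels >= b_level)

print("estimated transition probability:", ams())
```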

  13. Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Wu, Qinghua; Cai, Zhihua

    At present, mobile communications traffic routing design is complicated because an increasing number of systems are interconnected with one another. Mobile communication in wireless networks, for example, must consider two routing design conditions: circuit switching and packet switching. The difficulty in packet-switched routing design lies in its use of high-speed transmission links and its dynamic routing nature. In this paper, an evolutionary algorithm is used to determine the best solutions and the shortest communication paths. We developed a genetic optimization process that helps network planners find the best solutions, or the best paths for the routing table, in wireless communication networks easily and quickly. The experimental results show that the evolutionary algorithm not only obtains good solutions but also has a more predictable running time than a sequential genetic algorithm.
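
    As a rough illustration of the idea (not the paper's implementation), the sketch below evolves loop-free routes between a source and a destination on a weighted graph, selecting by total path cost. The path encoding, crossover at a shared node, and all parameters are assumptions, and mutation is omitted for brevity.

```python
# Illustrative sketch: a small genetic algorithm evolving loop-free routes on a
# weighted graph (adjacency dict {node: {neighbour: link_cost}}).
import random

def random_path(graph, src, dst, rng, max_hops=50):
    """Random loop-free walk from src to dst; returns None if it dead-ends."""
    path = [src]
    while path[-1] != dst and len(path) < max_hops:
        choices = [n for n in graph[path[-1]] if n not in path]
        if not choices:
            return None
        path.append(rng.choice(choices))
    return path if path[-1] == dst else None

def cost(graph, path):
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

def crossover(p1, p2, rng):
    """Splice two parents at a randomly chosen common intermediate node."""
    common = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not common:
        return list(p1)
    n = rng.choice(common)
    child = p1[:p1.index(n)] + p2[p2.index(n):]
    seen, dedup = set(), []                    # remove any loop introduced by the splice
    for node in child:
        if node in seen:
            dedup = dedup[:dedup.index(node)]  # cut back to before the repeated node
            seen = set(dedup)
        dedup.append(node)
        seen.add(node)
    return dedup

def ga_route(graph, src, dst, pop_size=40, generations=100, seed=1):
    rng = random.Random(seed)
    pop = []
    while len(pop) < pop_size:                 # build an initial feasible population
        p = random_path(graph, src, dst, rng)
        if p:
            pop.append(p)
    for _ in range(generations):
        pop.sort(key=lambda p: cost(graph, p))
        elite = pop[: pop_size // 2]           # keep the cheaper half
        children = [crossover(rng.choice(elite), rng.choice(elite), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children                 # mutation omitted for brevity
    return min(pop, key=lambda p: cost(graph, p))

# Example on a tiny four-node network:
g = {0: {1: 4, 2: 1}, 1: {0: 4, 3: 1}, 2: {0: 1, 3: 5}, 3: {1: 1, 2: 5}}
print(ga_route(g, 0, 3))
```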

  14. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  15. Adaptive Rule Based Fetal QRS Complex Detection Using Hilbert Transform

    PubMed Central

    Ulusar, Umit D.; Govindan, R.B.; Wilson, James D.; Lowery, Curtis L.; Preissl, Hubert; Eswaran, Hari

    2010-01-01

    In this paper we introduce an adaptive rule based QRS detection algorithm using the Hilbert transform (adHQRS) for fetal magnetocardiography processing. Hilbert transform is used to combine multiple channel measurements and the adaptive rule based decision process is used to eliminate spurious beats. The algorithm has been tested with a large number of datasets and promising results were obtained. PMID:19964648
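
    A minimal sketch of the general idea, assuming a multichannel recording combined through the Hilbert envelope and a simple peak-tracking threshold; the actual adHQRS decision rules are not reproduced here.

```python
# Sketch only (not the adHQRS implementation): combine channels through the Hilbert
# envelope and pick QRS-like peaks with an adaptive threshold that tracks recent
# accepted peak amplitudes. Sampling rate, refractory period and update rule are assumptions.
import numpy as np
from scipy.signal import hilbert

def detect_qrs(channels, fs, refractory_s=0.25):
    """channels: array of shape (n_channels, n_samples). Returns sample indices of beats."""
    # Summing the analytic-signal magnitudes emphasizes QRS energy across channels.
    envelope = np.abs(hilbert(channels, axis=1)).sum(axis=0)
    threshold = 3.0 * np.median(envelope)            # initial guess
    refractory = int(refractory_s * fs)
    beats, last = [], -refractory
    for i in range(1, len(envelope) - 1):
        is_peak = envelope[i] > envelope[i - 1] and envelope[i] >= envelope[i + 1]
        if is_peak and envelope[i] > threshold and i - last >= refractory:
            beats.append(i)
            last = i
            # Adaptive rule: threshold follows a running fraction of accepted peak heights.
            threshold = 0.9 * threshold + 0.1 * (0.5 * envelope[i])
    return np.array(beats)
```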

  16. Assessing the Reliability of Computer Adaptive Testing Branching Algorithms Using HyperCAT.

    ERIC Educational Resources Information Center

    Shermis, Mark D.; And Others

    The reliability of four branching algorithms commonly used in computer adaptive testing (CAT) was examined. These algorithms were: (1) maximum likelihood (MLE); (2) Bayesian; (3) modal Bayesian; and (4) crossover. Sixty-eight undergraduate college students were randomly assigned to one of the four conditions using the HyperCard-based CAT program,…

  17. An adaptive correspondence algorithm for modeling scenes with strong interreflections.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2009-01-01

    Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is projecting binary patterns in order to obtain correspondence and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not in the other. Our method performs better than the standard method because it avoids the gross errors caused by strong interreflections during the decoding process. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the interreflection within the scene. Thus, more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
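
    The interval-based classification rule can be sketched as follows; the way the bounds are derived here (from fully-lit and fully-dark reference images with an assumed margin) is a simplification, not the authors' exact bound computation.

```python
# Sketch of interval-based pixel classification for structured light: keep only
# pixels whose observed intensity is consistent with exactly one of the two
# intervals (directly lit vs. not directly lit). Margin and bounds are assumptions.
import numpy as np

def classify_pixels(observed, full_on, full_off, margin=0.05):
    """All inputs are float images in [0, 1] of equal shape.
    Returns +1 (directly lit), -1 (not directly lit), 0 (uncertain) per pixel."""
    span = full_on - full_off
    lit_interval   = (full_off + margin * span, full_on + 1e-6)
    unlit_interval = (full_off - 1e-6, full_off + margin * span)
    in_lit   = (observed >= lit_interval[0]) & (observed <= lit_interval[1])
    in_unlit = (observed >= unlit_interval[0]) & (observed <= unlit_interval[1])
    labels = np.zeros(observed.shape, dtype=int)
    labels[in_lit & ~in_unlit] = 1      # intensity consistent only with direct light
    labels[in_unlit & ~in_lit] = -1     # intensity consistent only with indirect/no light
    return labels                        # pixels in both or neither interval stay uncertain
```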

  18. A novel hybrid self-adaptive bat algorithm.

    PubMed

    Fister, Iztok; Fong, Simon; Brest, Janez; Fister, Iztok

    2014-01-01

    Nature-inspired algorithms attract many researchers worldwide for solving the hardest optimization problems. One of the newest members of this extensive family is the bat algorithm. To date, many variants of this algorithm have emerged for solving continuous as well as combinatorial problems. One of the more promising variants, a self-adaptive bat algorithm, has recently been proposed that enables a self-adaptation of its control parameters. In this paper, we have hybridized this algorithm using different DE strategies and applied them as local search heuristics for improving the current best solution, directing the swarm towards the better regions within the search space. The results of exhaustive experiments were promising and have encouraged us to invest further effort in this direction.
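
    A compact sketch of the hybridization idea, assuming a basic bat algorithm with a DE/best/1/bin step as the local search around the current best solution; the self-adaptation of control parameters described in the paper is not reproduced, and all parameters below are illustrative.

```python
# Sketch of a bat algorithm hybridized with a DE/best/1/bin local-search step
# (not the authors' HSABA code; loudness/pulse-rate adaptation is simplified).
import numpy as np

def sphere(x):                                       # toy objective
    return float(np.sum(x ** 2))

def hybrid_bat(fobj, dim=10, n_bats=30, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_bats, dim))            # bat positions
    v = np.zeros((n_bats, dim))                      # velocities
    fit = np.array([fobj(b) for b in x])
    best, best_fit = x[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        freq = rng.uniform(0.0, 2.0, (n_bats, 1))    # pulse frequencies
        v += (x - best) * freq
        cand = x + v
        walk = best + 0.01 * rng.standard_normal((n_bats, dim))
        cand = np.where(rng.random((n_bats, 1)) > 0.5, walk, cand)  # local walk near best
        f_cand = np.array([fobj(b) for b in cand])
        better = f_cand < fit                        # greedy acceptance (loudness omitted)
        x[better], fit[better] = cand[better], f_cand[better]
        if fit.min() < best_fit:
            best, best_fit = x[fit.argmin()].copy(), fit.min()
        # DE/best/1/bin step acting as a local search around the best solution.
        r1, r2 = rng.integers(0, n_bats, size=2)
        trial = np.where(rng.random(dim) < 0.9, best + 0.5 * (x[r1] - x[r2]), best)
        if fobj(trial) < best_fit:
            best, best_fit = trial, fobj(trial)
    return best, best_fit

print(hybrid_bat(sphere)[1])
```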

  19. A Novel Hybrid Self-Adaptive Bat Algorithm

    PubMed Central

    Fister, Iztok; Brest, Janez

    2014-01-01

    Nature-inspired algorithms attract many researchers worldwide for solving the hardest optimization problems. One of the newest members of this extensive family is the bat algorithm. To date, many variants of this algorithm have emerged for solving continuous as well as combinatorial problems. One of the more promising variants, a self-adaptive bat algorithm, has recently been proposed that enables a self-adaptation of its control parameters. In this paper, we have hybridized this algorithm using different DE strategies and applied them as local search heuristics for improving the current best solution, directing the swarm towards the better regions within the search space. The results of exhaustive experiments were promising and have encouraged us to invest further effort in this direction. PMID:25187904

  20. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper to address the problem that conventional enhancement algorithms cannot effectively identify the region of interest when the dynamic range of the image is large. Starting from the characteristics of human visual perception, the algorithm combines global adaptive enhancement with local feature boosting, so that not only is the contrast of the image raised but its texture also becomes more distinct. First, the global dynamic range of the image is adjusted: a correspondence is established between the dynamic range of the original image and the display grayscale, and the gray levels of bright objects are raised while those of dark targets are reduced, improving the overall contrast. Second, a filtering operation over the current pixel and its neighborhood extracts texture information and adjusts the brightness of the current pixel to enhance the local contrast. The algorithm overcomes the tendency of traditional edge-detection-based methods to blur outlines and preserves the distinctness of texture detail during enhancement. Finally, the globally adjusted image and the locally adjusted image are normalized and combined to ensure a smooth transition of image details. Extensive experiments compare the proposed algorithm with other conventional enhancement algorithms on two groups of blurred infrared images. The experiments show that histogram equalization boosts image contrast but leaves details unclear, the Retinex algorithm makes details distinguishable, and the proposed self-adaptive enhancement algorithm produces images with clear details and markedly improved contrast compared with the Retinex algorithm.
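
    The two-stage idea, a global dynamic-range remap followed by a local texture boost and a blend of the two results, can be sketched as follows; the percentile stretch, the Gaussian neighbourhood filter, and the gains are assumptions rather than the paper's formulas.

```python
# Sketch of global + local IR enhancement (illustrative, not the paper's method):
# a percentile stretch maps the occupied dynamic range to display grey levels,
# then texture extracted with a Gaussian neighbourhood filter is amplified locally.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ir(image, low_pct=1.0, high_pct=99.0, local_gain=1.5, blend=0.6):
    img = image.astype(np.float64)
    # Global stage: stretch the occupied dynamic range to [0, 255].
    lo, hi = np.percentile(img, [low_pct, high_pct])
    global_adj = np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0) * 255.0
    # Local stage: texture = detail remaining after smoothing the neighbourhood.
    smooth = gaussian_filter(global_adj, sigma=3.0)
    local_adj = np.clip(smooth + local_gain * (global_adj - smooth), 0.0, 255.0)
    # Blend the two adjusted images for a smooth transition of details.
    return (blend * local_adj + (1.0 - blend) * global_adj).astype(np.uint8)
```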

  1. Adaptive algorithm for cloud cover estimation from all-sky images over the sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, M. A.; Sinitsyn, A. V.

    2016-05-01

    A new algorithm for cloud cover estimation has been formulated and developed based on a synthetic control index, called the grayness rate index, and an additional adaptive filtering step for the Mie scattering contribution. A setup for automated cloud cover estimation has been designed, assembled, and tested under field conditions. The results show a significant advantage of the new algorithm over commonly used procedures.

  2. Region Adaptive Color Demosaicing Algorithm Using Color Constancy

    NASA Astrophysics Data System (ADS)

    Kim, Chang Won; Oh, Hyun Mook; Yoo, Du Sic; Kang, Moon Gi

    2010-12-01

    This paper proposes a novel way of combining color demosaicing and the auto white balance (AWB) method, which are important parts of image processing. Performance of the AWB is generally affected by demosaicing results because most AWB algorithms are performed posterior to color demosaicing. In this paper, in order to increase the performance and efficiency of the AWB algorithm, the color constancy problem is examined during the color demosaicing step. Initial estimates of the directional luminance and chrominance values are defined for estimating edge direction and calculating the AWB gain. In order to prevent color failure in conventional edge-based AWB methods, we propose a modified edge-based AWB method that uses a predefined achromatic region. The estimation of edge direction is performed region adaptively by using the local statistics of the initial estimates of the luminance and chrominance information. Simulated and real Bayer color filter array (CFA) data are used to evaluate the performance of the proposed method. When compared to conventional methods, the proposed method shows significant improvements in terms of visual and numerical criteria.

  3. Adaptive-mesh algorithms for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Roe, Philip L.; Quirk, James

    1993-01-01

    The basic goal of adaptive-mesh algorithms is to distribute computational resources wisely by increasing the resolution of 'important' regions of the flow and decreasing the resolution of regions that are less important. While this goal is one that is worthwhile, implementing schemes that have this degree of sophistication remains more of an art than a science. In this paper, the basic pieces of adaptive-mesh algorithms are described and some of the possible ways to implement them are discussed and compared. These basic pieces are the data structure to be used, the generation of an initial mesh, the criterion to be used to adapt the mesh to the solution, and the flow-solver algorithm on the resulting mesh. Each of these is discussed, with particular emphasis on methods suitable for the computation of compressible flows.

  4. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
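
    The LMSTDE idea, adapting an FIR filter between two sensor signals and reading the time delay off the dominant filter tap, can be illustrated with a normalized LMS sketch; the parameters and the synthetic example are assumptions, not the thesis design.

```python
# Minimal LMS time-delay estimation sketch: adapt an FIR filter so that recent
# samples of sensor 1 predict sensor 2; the lag of the dominant tap is the delay estimate.
import numpy as np

def lms_tde(x, y, n_taps=64, mu=0.5):
    """Estimate the delay (in samples) of y relative to x with a normalized LMS filter."""
    w = np.zeros(n_taps)
    x_pad = np.concatenate([np.zeros(n_taps - 1), x])
    for n in range(len(y)):
        u = x_pad[n : n + n_taps][::-1]        # most recent n_taps samples of x, newest first
        e = y[n] - w @ u                       # prediction error
        w += mu * e * u / (1e-9 + u @ u)       # normalized LMS weight update
    return int(np.argmax(np.abs(w))), w        # dominant tap index ~ time delay

# Example: y is x delayed by 17 samples plus noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.concatenate([np.zeros(17), x[:-17]]) + 0.1 * rng.standard_normal(5000)
print(lms_tde(x, y)[0])   # expected to converge near 17
```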

  5. Laser Raman detection of platelets for early and differential diagnosis of Alzheimer’s disease based on an adaptive Gaussian process classification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Yusheng; Du, Z. W.; Yang, Y. J.; Chen, P.; Tian, Q.; Shang, X. L.; Liu, Z. C.; Yao, X. Q.; Wang, J. Z.; Wang, X. H.; Cheng, Y.; Peng, J.; Shen, A. G.; Hu, J. M.

    2013-04-01

    Early and differential diagnosis of Alzheimer’s disease (AD) has puzzled many clinicians. In this work, laser Raman spectroscopy (LRS) was developed to diagnose AD from platelet samples from AD transgenic mice and non-transgenic controls of different ages. An adaptive Gaussian process (GP) classification algorithm was used to re-establish the classification models of early AD, advanced AD and the control group with just two features and the capacity for noise reduction. Compared with the previous multilayer perceptron network method, the GP showed much better classification performance with the same feature set. Besides, spectra of platelets isolated from AD and Parkinson’s disease (PD) mice were also discriminated. Spectral data from 4 month AD (n = 39) and 12 month AD (n = 104) platelets, as well as control data (n = 135), were collected. Prospective application of the algorithm to the data set resulted in a sensitivity of 80%, a specificity of about 100% and a Matthews correlation coefficient of 0.81. Samples from PD (n = 120) platelets were also collected for differentiation from 12 month AD. The results suggest that platelet LRS detection analysis with the GP appears to be an easier and more accurate method than current ones for early and differential diagnosis of AD.
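
    A hedged sketch of a two-feature Gaussian process classifier in the spirit of the description above; the feature values below are synthetic placeholders and the kernel choice is an assumption, not the clinical pipeline.

```python
# Illustrative two-feature GP classification (placeholder data, not the Raman features).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder "features": two summary values per spectrum (e.g. band intensities).
X_ad = rng.normal([1.0, 2.0], 0.3, size=(104, 2))     # stand-in for 12-month AD platelets
X_ctrl = rng.normal([0.2, 1.2], 0.3, size=(135, 2))   # stand-in for controls
X = np.vstack([X_ad, X_ctrl])
y = np.array([1] * len(X_ad) + [0] * len(X_ctrl))

# RBF plus white-noise kernel gives the noise-tolerant behaviour mentioned above.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0) + WhiteKernel(1e-2))
print(cross_val_score(gpc, X, y, cv=5).mean())
```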

  6. Adaptive improved natural gradient algorithm for blind source separation.

    PubMed

    Liu, Jian-Qiang; Feng, Da-Zheng; Zhang, Wei-Wei

    2009-03-01

    We propose an adaptive improved natural gradient algorithm for blind separation of independent sources. First, inspired by the well-known backpropagation algorithm, we incorporate a momentum term into the natural gradient learning process to accelerate the convergence rate and improve the stability. Then an estimation function for the adaptation of the separation model is obtained to adaptively control a step-size parameter and a momentum factor. The proposed natural gradient algorithm with variable step-size parameter and variable momentum factor is therefore particularly well suited to blind source separation in a time-varying environment, such as an abruptly changing mixing matrix or signal power. The expected improvement in the convergence speed, stability, and tracking ability of the proposed algorithm is demonstrated by extensive simulation results in both time-invariant and time-varying environments. The ability of the proposed algorithm to separate extremely weak or badly scaled sources is also verified. In addition, simulation results show that the proposed algorithm is suitable for separating mixtures of many sources (e.g., the number of sources is 10) in the complete case.
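
    The core update, a natural-gradient step augmented with a momentum term, can be sketched as follows; the adaptive estimation of the step size and momentum factor described in the paper is not reproduced, and fixed values are assumed instead.

```python
# Sketch of a natural-gradient blind source separation update with a momentum term
# (illustrative; assumes roughly unit-scale mixtures and super-Gaussian sources).
import numpy as np

def ng_bss(X, mu=0.01, beta=0.5, n_passes=50):
    """X: (n_sources, n_samples) mixed signals. Returns the unmixing matrix W."""
    n = X.shape[0]
    W = np.eye(n)
    dW_prev = np.zeros((n, n))
    for _ in range(n_passes):
        for x in X.T:                                   # sample-by-sample adaptation
            y = W @ x
            phi = np.tanh(y)                            # score function for super-Gaussian sources
            grad = (np.eye(n) - np.outer(phi, y)) @ W   # natural-gradient direction
            dW = mu * grad + beta * dW_prev             # momentum term accelerates convergence
            W += dW
            dW_prev = dW
    return W

# Usage: S_hat = ng_bss(X) @ X recovers the sources up to scale and permutation.
```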

  7. Data-adaptive algorithms for calling alleles in repeat polymorphisms.

    PubMed

    Stoughton, R; Bumgarner, R; Frederick, W J; McIndoe, R A

    1997-01-01

    Data-adaptive algorithms are presented for separating overlapping signatures of heterozygotic allele pairs in electrophoresis data. Application is demonstrated for human microsatellite CA-repeat polymorphisms in LiCor 4000 and ABI 373 data. The algorithms allow overlapping alleles to be called correctly in almost every case where a trained observer could do so, and provide a fast automated objective alternative to human reading of the gels. The algorithm also supplies an indication of confidence level which can be used to flag marginal cases for verification by eye, or as input to later stages of statistical analysis. PMID:9059812

  8. TH-E-BRE-11: Adaptive-Beamlet Based Finite Size Pencil Beam (AB-FSPB) Dose Calculation Algorithm for Independent Verification of IMRT and VMAT

    SciTech Connect

    Park, C; Arhjoul, L; Yan, G; Lu, B; Li, J; Liu, C

    2014-06-15

    Purpose: In current IMRT and VMAT settings, the use of a sophisticated dose calculation procedure is inevitable in order to account for the complex treatment fields created by MLCs. As a consequence, the independent volumetric dose verification procedure is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient pencil-beam-based dose calculation algorithm that minimizes the computational procedure while preserving accuracy. Methods: The computational time of the Finite Size Pencil Beam (FSPB) algorithm is proportional to the number of infinitesimal, identical beamlets that constitute the arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modelled such that the beamlets representing an arbitrary field shape no longer need to be infinitesimal or identical. In consequence, it is possible to represent an arbitrary field shape with a combination of a minimal number of beamlets of different sizes. Results: On comparing FSPB with AB-FSPB, the complexity of the algorithm was reduced significantly. For a 25 by 25 cm2 square field, one beamlet of 25 by 25 cm2 was sufficient to calculate the dose in AB-FSPB, whereas in conventional FSPB a minimum of 2500 beamlets of 0.5 by 0.5 cm2 size were needed to calculate a dose comparable to the result computed by the Treatment Planning System (TPS). The algorithm was also found to be GPU compatible, maximizing its computational speed. On calculating the 3D dose of an IMRT plan (∼30 control points) and a VMAT plan (∼90 control points) with a grid size of 2.0 mm (200 by 200 by 200), the dose could be computed within 3∼5 and 10∼15 seconds, respectively. Conclusion: The authors have developed an efficient pencil-beam-type dose calculation algorithm called AB-FSPB. Its fast computation together with GPU compatibility has shown performance better than conventional FSPB. This enables the implementation of AB-FSPB in the clinical environment for independent verification of IMRT and VMAT dose.

  9. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation.
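
    A minimal sketch of wavelet packet denoising in the spirit of the description, assuming PyWavelets, a db8 wavelet, and a universal soft threshold; the paper's adaptive filtering rule is not reproduced.

```python
# Hedged sketch of wavelet-packet soft-threshold denoising (not the paper's filter):
# decompose the noisy recording, shrink packet coefficients toward zero, reconstruct.
import numpy as np
import pywt

def wp_denoise(signal, wavelet="db8", level=4):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # Estimate the noise scale from the highest-frequency packet via the median rule.
    sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    for node in nodes:
        node.data = pywt.threshold(node.data, thresh, mode="soft")
    return wp.reconstruct(update=True)[: len(signal)]
```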

  10. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661

  11. Self-adaptive algorithm for segmenting skin regions

    NASA Astrophysics Data System (ADS)

    Kawulok, Michal; Kawulok, Jolanta; Nalepa, Jakub; Smolka, Bogdan

    2014-12-01

    In this paper, we introduce a new self-adaptive algorithm for segmenting human skin regions in color images. Skin detection and segmentation is an active research topic, and many solutions have been proposed so far, especially concerning skin tone modeling in various color spaces. Such models are used for pixel-based classification, but its accuracy is limited due to high variance and low specificity of human skin color. In many works, skin model adaptation and spatial analysis were reported to improve the final segmentation outcome; however, little attention has been paid so far to the possibilities of combining these two improvement directions. Our contribution lies in learning a local skin color model on the fly, which is subsequently applied to the image to determine the seeds for the spatial analysis. Furthermore, we also take advantage of textural features for computing local propagation costs that are used in the distance transform. The results of an extensive experimental study confirmed that the new method is highly competitive, especially for extracting the hand regions in color images.

  12. Adaptive clustering algorithm for community detection in complex networks.

    PubMed

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods or algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduced a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior where vertices always travel toward their preferable neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process where vertices constantly regroup. In addition, we show that our algorithm appears advantageous over other competing methods (e.g., the Newman-fast algorithm) through intensive evaluation. The applications in three real-world networks demonstrate the superiority of our algorithm to find communities that are parallel with the appropriate organization in reality. PMID:18999501

  13. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using this power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, an even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
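
    A CPU-side sketch of the kNN-accelerated adaptive IDW idea using a k-d tree; the paper describes a GPU implementation with an even-grid structure, and the adaptive-power rule below is a simplified stand-in.

```python
# Illustrative kNN-based adaptive IDW on the CPU (not the GPU implementation).
import numpy as np
from scipy.spatial import cKDTree

def aidw_interpolate(sample_xy, sample_z, query_xy, k=12):
    tree = cKDTree(sample_xy)
    dist, idx = tree.query(query_xy, k=k)          # fast kNN search per query point
    # Adaptive power: sparser neighbourhoods (larger mean distance) get a larger exponent.
    mean_d = dist.mean(axis=1)
    power = np.interp(mean_d, [mean_d.min(), mean_d.max()], [1.0, 4.0])
    w = 1.0 / np.maximum(dist, 1e-12) ** power[:, None]
    return (w * sample_z[idx]).sum(axis=1) / w.sum(axis=1)

# Example with synthetic scattered data:
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(500, 2))
vals = np.sin(pts[:, 0] / 10.0) + np.cos(pts[:, 1] / 10.0)
queries = rng.uniform(0, 100, size=(10, 2))
print(aidw_interpolate(pts, vals, queries))
```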

  14. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using this power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, an even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.

  15. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm that generates and evaluates proposals consisting of both a price and a timeslot is necessary. The contribution of this work is thus the design of an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode", is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating concurrent sets of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, due to the concurrent price and timeslot negotiation mechanism with the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is comparatively lower than in the previous scheme (burst-N).

  16. Adaptation algorithms for 2-D feedforward neural networks.

    PubMed

    Kaczorek, T

    1995-01-01

    The generalized weight adaptation algorithms presented by J.G. Kuschewski et al. (1993) and by S.H. Zak and H.J. Sira-Ramirez (1990) are extended for 2-D madaline and 2-D two-layer feedforward neural nets (FNNs).

  17. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
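
    A simplified version of the innovation-based adaptation described above can be sketched for a scalar random-walk state: the measurement-noise variance is re-estimated from a sliding window of innovations. The flight-data model itself is not reproduced, and the parameters are assumptions.

```python
# Sketch of an innovation-based adaptive Kalman filter for a scalar random-walk state.
import numpy as np
from collections import deque

def adaptive_kf(z, q=1e-4, r0=1.0, window=30):
    x, p, r = 0.0, 1.0, r0
    innovations = deque(maxlen=window)
    estimates = []
    for zi in z:
        p = p + q                         # predict (random-walk state model)
        nu = zi - x                       # innovation
        innovations.append(nu)
        if len(innovations) == window:    # adapt R from the innovation sample variance
            r = max(np.var(innovations) - p, 1e-6)
        k = p / (p + r)                   # Kalman gain
        x = x + k * nu                    # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)
```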

  18. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements for solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units.

  19. Adaptive sensor array algorithm for structural health monitoring of helmet

    NASA Astrophysics Data System (ADS)

    Zou, Xiaotian; Tian, Ye; Wu, Nan; Sun, Kai; Wang, Xingwei

    2011-04-01

    The adaptive neural network is a standard technique used in nonlinear system estimation and learning applications for dynamic models. In this paper, we introduce an adaptive sensor fusion algorithm for a helmet structural health monitoring system. The helmet structural health monitoring system is used to study the effects of ballistic/blast events on the helmet and human skull. Installed inside the helmet system is an optical fiber pressure sensor array. After implementing the adaptive estimation algorithm in the helmet system, a dynamic model for the sensor array has been developed. The dynamic response characteristics of the sensor network are estimated from the pressure data by applying an adaptive control algorithm using an artificial neural network. With the estimated parameters and position data from the dynamic model, the pressure distribution over the whole helmet can be calculated following the Bézier surface interpolation method. The distribution pattern inside the helmet will be very helpful for improving helmet design to provide better protection to soldiers from head injuries.

  20. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  1. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits local grid refinement to minimize the spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.

  2. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.
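
    As a simplified stand-in for the proposed estimator (not the paper's algorithm), the sketch below tracks the frequency of a noisy tone with a recursive least squares update of the single parameter a = 2 cos(omega), using the identity y[n] + y[n-2] = a y[n-1] satisfied by a clean sinusoid.

```python
# Recursive least-squares frequency tracking of a noisy sinusoid (illustrative only).
import numpy as np

def rls_frequency(y, lam=0.995):
    """y: noisy sinusoid samples. Returns the running frequency estimate (rad/sample)."""
    a_hat, p = 0.0, 1000.0                    # scalar RLS state and covariance
    omega = np.zeros(len(y))
    for n in range(2, len(y)):
        phi = y[n - 1]                        # regressor
        target = y[n] + y[n - 2]              # equals a * y[n-1] for a clean tone
        k = p * phi / (lam + phi * p * phi)   # RLS gain
        a_hat += k * (target - phi * a_hat)
        p = (p - k * phi * p) / lam           # forgetting factor keeps the estimator adaptive
        omega[n] = np.arccos(np.clip(a_hat / 2.0, -1.0, 1.0))
    return omega

# Example: the estimate improves recursively as more observations arrive.
n = np.arange(4000)
y = np.sin(0.3 * n) + 0.05 * np.random.default_rng(1).standard_normal(n.size)
print(rls_frequency(y)[-1])   # expected to approach 0.3
```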

  3. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows.

    PubMed

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon's 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results.

  4. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows.

    PubMed

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon's 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results. PMID:26132158

  5. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon’s 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results. PMID:26132158

  6. Study of the Algorithm of Backtracking Decoupling and Adaptive Extended Kalman Filter Based on the Quaternion Expanded to the State Variable for Underwater Glider Navigation

    PubMed Central

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-01-01

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments were performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method. PMID:25479331

  7. Study of the algorithm of backtracking decoupling and adaptive extended Kalman filter based on the quaternion expanded to the state variable for underwater glider navigation.

    PubMed

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-01-01

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments were performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method. PMID:25479331

  8. Study of the algorithm of backtracking decoupling and adaptive extended Kalman filter based on the quaternion expanded to the state variable for underwater glider navigation.

    PubMed

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-12-03

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading angle, pitch angle and roll angle) becomes more serious when pitch or roll motion occurs. This cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination becomes a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smoothes the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, the pitch and roll motion are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments were performed. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing the errors compared with the conventional method.

  9. The "Juggler" algorithm: a hybrid deformable image registration algorithm for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Xia, Junyi; Chen, Yunmei; Samant, Sanjiv S.

    2007-03-01

    Fast deformable registration can potentially facilitate the clinical implementation of adaptive radiation therapy (ART), which allows for daily organ deformations not accounted for in radiotherapy treatment planning, which typically utilizes a static organ model, to be incorporated into the fractionated treatment. Existing deformable registration algorithms typically utilize a specific diffusion model, and require a large number of iterations to achieve convergence. This limits the online applications of deformable image registration for clinical radiotherapy, such as daily patient setup variations involving organ deformation, where high registration precision is required. We propose a hybrid algorithm, the "Juggler", based on a multi-diffusion model to achieve fast convergence. The Juggler achieves fast convergence by applying two different diffusion models: i) one being optimized quickly for matching high gradient features, i.e. bony anatomies; and ii) the other being optimized for further matching low gradient features, i.e. soft tissue. The regulation of these 2 competing criteria is achieved using a threshold of a similarity measure, such as cross correlation or mutual information. A multi-resolution scheme was applied for faster convergence involving large deformations. Comparisons of the Juggler algorithm were carried out with demons method, accelerated demons method, and free-form deformable registration using 4D CT lung imaging from 5 patients. Based on comparisons of difference images and similarity measure computations, the Juggler produced a superior registration result. It achieved the desired convergence within 30 iterations, and typically required <90sec to register two 3D image sets of size 256×256×40 using a 3.2 GHz PC. This hybrid registration strategy successfully incorporates the benefits of different diffusion models into a single unified model.

  10. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  11. [Adaptive algorithm for automatic measurement of retinal vascular diameter].

    PubMed

    Münch, K; Vilser, W; Senff, I

    1995-11-01

    A new adaptive computer-aided method for the measurement of blood vessel diameters has been developed. Within areas of interest in the image, the algorithm detects, line-wise, the edges of the vessels, which are then used for image-wise approximation and noise filtration. A high level of adaptivity with respect to numerous measuring parameters ensures its use in a wide range of applications. Thus, it has been shown to significantly improve clinically relevant reproducibility in the area of follow-up observations. The standard deviation for vessel diameter was (2.2 +/- 0.7)% in the case of arteries and (1.8 +/- 0.5)% in the case of veins. Testing the algorithm in images of poor quality revealed its high level of reliability and sensitivity.

  12. An adaptive phase alignment algorithm for cartesian feedback loops

    NASA Astrophysics Data System (ADS)

    Gimeno-Martin, A.; Pardo-Martin, J.; Ortega-Gonzalez, F.

    2010-01-01

    An adaptive algorithm to correct phase misalignments in Cartesian feedback linearization loops for power amplifiers has been presented. It yields an error smaller than 0.035 rad between forward and feedback loop signals once convergence is reached. Because this algorithm enables a feedback system to process forward and feedback samples belonging to almost the same algorithm iteration, it is suitable to improve the performance not only of power amplifiers but also any other digital feedback system for communications systems and circuits such as all digital phase locked loops. Synchronizing forward and feedback paths of Cartesian feedback loops takes a small period of time after the system starts up. The phase alignment algorithm needs to converge before the feedback Cartesian loop can start its ideal behavior. However, once the steady state is reached, both paths can be considered synchronized, and the Cartesian feedback loop will only depend on the loop parameters (open-loop gain, loop bandwidth, etc.). It means that the linearization process will also depend only on these parameters since the misalignment effect disappears. Therefore, this algorithm relieves the power amplifier linearizer circuit design of any task required for solving phase misalignment effects inherent to Cartesian feedback systems. Furthermore, when a feedback Cartesian loop has to be designed, the designer can consider that forward and feedback paths are synchronized, since the phase alignment algorithm will do this task. This will reduce the simulation complexity. Then, all efforts are applied to determining the suitable loop parameters that will make the linearization process more efficient.

  13. Analysis of hypersonic aircraft inlets using flow adaptive mesh algorithms

    NASA Astrophysics Data System (ADS)

    Neaves, Michael Dean

    The numerical investigation into the dynamics of unsteady inlet flowfields is applied to a three-dimensional scramjet inlet-isolator-diffuser geometry designed for hypersonic type applications. The Reynolds-Averaged Navier-Stokes equations are integrated in time using a subiterating, time-accurate implicit algorithm. Inviscid fluxes are calculated using the Low Diffusion Flux Splitting Scheme of Edwards. A modified version of the dynamic solution-adaptive point movement algorithm of Benson and McRae is used in a coupled mode to dynamically resolve the features of the flow by enhancing the spatial accuracy of the simulations. The unsteady mesh terms are incorporated into the flow solver via the inviscid fluxes. The dynamic solution-adaptive grid algorithm of Benson and McRae is modified to improve orthogonality at the boundaries to ensure accurate application of boundary conditions and properly resolve turbulent boundary layers. Shock tube simulations are performed to ascertain the effectiveness of the algorithm for unsteady flow situations on fixed and moving grids. Unstarts due to a combustor and freestream angle of attack perturbations are simulated in a three-dimensional inlet-isolator-diffuser configuration.

  14. Data-adaptive Shrinkage via the Hyperpenalized EM Algorithm

    PubMed Central

    Boonstra, Philip S.; Taylor, Jeremy M. G.; Mukherjee, Bhramar

    2015-01-01

    We propose an extension of the expectation-maximization (EM) algorithm, called the hyperpenalized EM (HEM) algorithm, that maximizes a penalized log-likelihood, for which some data are missing or unavailable, using a data-adaptive estimate of the penalty parameter. This is potentially useful in applications for which the analyst is unable or unwilling to choose a single value of a penalty parameter but instead can posit a plausible range of values. The HEM algorithm is conceptually straightforward and also very effective, and we demonstrate its utility in the analysis of a genomic data set. Gene expression measurements and clinical covariates were used to predict survival time. However, many survival times are censored, and some observations only contain expression measurements derived from a different assay, which together constitute a difficult missing data problem. It is desired to shrink the genomic contribution in a data-adaptive way. The HEM algorithm successfully handles both the missing data and shrinkage aspects of the problem. PMID:26834856

  15. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.

  16. A local adaptive discretization algorithm for Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Spreng, Fabian; Schnabel, Dirk; Mueller, Alexandra; Eberhard, Peter

    2014-06-01

    In this paper, an extension to the Smoothed Particle Hydrodynamics (SPH) method is proposed that allows for an adaptation of the discretization level of a simulated continuum at runtime. By combining a local adaptive refinement technique with a newly developed coarsening algorithm, one is able to improve the accuracy of the simulation results while reducing the required computational cost at the same time. For this purpose, the number of particles is, on the one hand, adaptively increased in critical areas of a simulation model. Typically, these are areas that show a relatively low particle density and high gradients in stress or temperature. On the other hand, the number of SPH particles is decreased for domains with a high particle density and low gradients. Besides a brief introduction to the basic principle of the SPH discretization method, the extensions to the original formulation providing such a local adaptive refinement and coarsening of the modeled structure are presented in this paper. After having introduced its theoretical background, the applicability of the enhanced formulation, as well as the benefit gained from the adaptive model discretization, is demonstrated in the context of four different simulation scenarios focusing on solid continua. While presenting the results found for these examples, several properties of the proposed adaptive technique are discussed, e.g. the conservation of momentum as well as the existing correlation between the chosen refinement and coarsening patterns and the observed quality of the results.

  17. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
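
    The sliding-window DFT at the heart of such a tracker can be illustrated schematically as follows; this is only a Python sketch, not the IOTA real-time C code, and the scan data, window length and fringe bin are hypothetical.

```python
import numpy as np

def sliding_dft_phase(scan, window, bin_k):
    """Track the phase of a single DFT bin over a sliding window of the scan.
    The fringe packet appears as a strong component in bin `bin_k`; its
    unwrapped phase is proportional to the optical path difference."""
    n = np.arange(window)
    basis = np.exp(-2j * np.pi * bin_k * n / window)   # single-bin DFT kernel
    phases = []
    for start in range(len(scan) - window + 1):
        X = np.dot(scan[start:start + window], basis)  # one DFT coefficient
        phases.append(np.angle(X))
    return np.unwrap(np.array(phases))

# Hypothetical interferogram: a fringe at bin 8 of a 64-sample window, plus noise.
rng = np.random.default_rng(1)
t = np.arange(512)
scan = np.cos(2 * np.pi * 8 * t / 64 + 0.5) + 0.2 * rng.standard_normal(t.size)
phase_track = sliding_dft_phase(scan, window=64, bin_k=8)
print(phase_track[:5])
```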

  18. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  19. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed-loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced, and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
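
    A minimal sketch of the variable-gain (normalized) LMS update that this kind of scheme builds on is given below; it is plain FIR system identification rather than the paper's closed-loop controller, and the plant, signal lengths and step-size constant are assumptions.

```python
import numpy as np

def nlms_identify(x, d, taps=16, mu0=0.5, eps=1e-6):
    """Normalized LMS identification of an FIR response from input x and
    desired output d.  The effective gain mu0/(eps + ||x_window||^2) varies
    with the input power: the simplest form of a variable-gain gradient update."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        xw = x[n - taps + 1:n + 1][::-1]     # x[n], x[n-1], ..., x[n-taps+1]
        e = d[n] - w @ xw                    # a priori error
        w += (mu0 / (eps + xw @ xw)) * e * xw
    return w

# Hypothetical plant: a short FIR filter driven by white noise.
rng = np.random.default_rng(2)
h_true = np.array([0.6, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = nlms_identify(x, d, taps=8)
print(np.round(w_hat[:3], 3))   # should be close to h_true
```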

  20. Efficient Algorithm for Optimizing Adaptive Quantum Metrology Processes

    NASA Astrophysics Data System (ADS)

    Hentschel, Alexander; Sanders, Barry C.

    2011-12-01

    Quantum-enhanced metrology infers an unknown quantity with accuracy beyond the standard quantum limit (SQL). Feedback-based metrological techniques are promising for beating the SQL but devising the feedback procedures is difficult and inefficient. Here we introduce an efficient self-learning swarm-intelligence algorithm for devising feedback-based quantum metrological procedures. Our algorithm can be trained with simulated or real-world trials and accommodates experimental imperfections, losses, and decoherence.

  2. Adaptive handoff algorithms based on self-organizing neural networks to enhance the quality of service of nonstationary traffic in hierarchical cellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2000-03-01

    Third-generation (3G) wireless networks, based on a hierarchical cellular structure, support tiered levels of multimedia services. These services can be categorized as real-time and delay-sensitive, or non-real-time and delay-insensitive. Each call carries demand for one or more services in parallel, each with a guaranteed quality of service (QoS). Roaming is handled by handoff procedures between base stations (BSs) and the mobile subscribers (MSs) within the network. Metrics such as the probabilities of handoff failure, dropped calls and blocked calls; handoff transition time; and handoff rate are used to evaluate the handoff schemes, which also directly affect QoS. Previous researchers have proposed a fuzzy logic system (FLS) with neural encoding of the rule base and a probabilistic neural network to solve the handoff decision as a pattern recognition problem in the set of MS signal measurements and mobility amid fading path uncertainties. Both neural approaches evaluate only voice traffic in a closed, single-layer network of uniform cells. This paper proposes a new topology-preserving, self-organizing neural network (SONN) for both handoff and admission control as part of an overall resource allocation (RA) problem to support QoS in a three-layer, wideband CDMA HCS with dynamic loading of multimedia services. MS profiles include simultaneous service requirements, which are mapped to a new set of variables defined in terms of the network radio resources (RRs). Simulations of the new SONN-based algorithms under various operating scenarios of MS mobility, dynamic loading, active set size, and RR bounds, using published traffic models of 3G services, compare their performance with earlier approaches.

  3. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  4. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched-filter sidelobes arising from diversified LPI waveform designs and sensor resolution are two important considerations in radars and active sensors in general. Matched-filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments. Improving these factors depends, in part, on the transmitted waveform and consequently on the pulse compression technique. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension to the traditional Iterative Adaptive Approach (IAA). MF-IAA takes the matched filter output as its input. The motivation is to facilitate implementation of the Iterative Adaptive Approach without disrupting the processing chain of the traditional matched filter. Similar to IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares-based spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. The performance of MF-IAA is measured with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated. The robustness of MF-IAA with respect to different LPI waveforms and SNR levels is also demonstrated.
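
    For reference, the matched-filter stage whose output MF-IAA post-processes can be sketched as a simple correlation; the chirp waveform, target positions and noise level below are hypothetical, and the IAA refinement itself is not shown.

```python
import numpy as np

def matched_filter(rx, waveform):
    """Classical pulse compression: correlate the received samples with the
    conjugated, time-reversed transmitted waveform.  The result is what an
    IAA-style post-processor such as MF-IAA would take as its input."""
    return np.convolve(rx, np.conj(waveform[::-1]), mode="same")

# Hypothetical LFM (chirp) pulse and two point targets of different strength.
n = np.arange(64)
chirp = np.exp(1j * np.pi * 0.01 * n**2)
rx = np.zeros(512, dtype=complex)
rx[100:164] += 1.0 * chirp                   # strong target
rx[140:204] += 0.1 * chirp                   # weak target partially masked by sidelobes
rng = np.random.default_rng(3)
rx += 0.05 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))
y = matched_filter(rx, chirp)
print(int(np.argmax(np.abs(y))))             # peak near the strong target's centre
```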

  5. Analysis of adaptive algorithms for an integrated communication network

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim

    1985-01-01

    Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.

  6. An adaptive ant colony system algorithm for continuous-space optimization problems.

    PubMed

    Li, Yan-jun; Wu, Tie-jun

    2003-01-01

    Ant colony algorithms comprise a novel category of evolutionary computation methods for optimization problems, especially for sequencing-type combinatorial optimization problems. An adaptive ant colony algorithm is proposed in this paper to tackle continuous-space optimization problems, using a new objective-function-based heuristic pheromone assignment approach for pheromone update to filter solution candidates. Global optimal solutions can be reached more rapidly by self-adjusting the path-searching behaviors of the ants according to objective values. The performance of the proposed algorithm is compared with a basic ant colony algorithm and a Sequential Quadratic Programming approach in solving two benchmark problems with multiple extremes. The results indicated that the efficiency and reliability of the proposed algorithm were greatly improved. PMID:12656341

  7. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In the process of raisin production, a variety of color impurities appear and need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. To obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors, mildew, spots, and other external features, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After this processing, the image data were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; accordingly, the image data were divided into different categories, making the categories of abnormal colors distinct. Using this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
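
    The clustering step of such a sorter can be sketched with a minimal k-means over pixel colours; this omits the thresholding, frame subtraction and wavelet smoothing described above, and the colour statistics below are synthetic placeholders.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means on an (N, 3) array of RGB pixel values."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Hypothetical pixel data: mostly "normal" raisin colours plus a few off-colour pixels.
rng = np.random.default_rng(5)
normal = rng.normal([90, 60, 50], 8, size=(950, 3))     # dark reddish-brown
mouldy = rng.normal([170, 170, 160], 10, size=(50, 3))  # pale grey patches
pixels = np.vstack([normal, mouldy])
centers, labels = kmeans(pixels, k=2)
counts = np.bincount(labels, minlength=2)
abnormal_cluster = int(np.argmin(counts))   # minority colour class = abnormal
print(f"abnormal pixels flagged: {counts[abnormal_cluster]}")
```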

  8. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  9. New hybrid adaptive neuro-fuzzy algorithms for manipulator control with uncertainties - comparative study.

    PubMed

    Alavandar, Srinivasan; Nigam, M J

    2009-10-01

    Control of an industrial robot includes nonlinearities, uncertainties and external perturbations that should be considered in the design of control laws. In this paper, some new hybrid adaptive neuro-fuzzy control algorithms (ANFIS) have been proposed for manipulator control with uncertainties. These hybrid controllers consist of adaptive neuro-fuzzy controllers and conventional controllers. The outputs of these controllers are applied to produce the final actuation signal based on current position and velocity errors. Numerical simulation using the dynamic model of a six-DOF PUMA robot arm with uncertainties shows the effectiveness of the approach in trajectory tracking problems. RMS error and maximum error are used as performance indices for comparison. It is observed that the hybrid adaptive neuro-fuzzy controllers perform better than purely conventional/adaptive controllers, and in particular the hybrid controller structure consisting of an adaptive neuro-fuzzy controller and a critically damped inverse dynamics controller.

  10. Tree-based shortest-path routing algorithm

    NASA Astrophysics Data System (ADS)

    Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.

    1998-12-01

    A tree-based shortest-path routing algorithm is introduced in this paper. With this algorithm, every network node can maintain a shortest-path routing tree topology of the network with itself as the root. In this algorithm, every node constructs its own routing tree based upon its neighbors' routing trees. Initially, the routing tree at each node has only the root, the node itself. As information is exchanged, every node's routing tree evolves until a complete tree is obtained. This algorithm is a trade-off between the distance-vector algorithm and the link-state algorithm. Loops are automatically deleted, so there is no count-to-infinity effect. A simple routing tree information storage approach and a protocol data unit format to transmit the tree information are given. Some special issues, such as adaptation to topology changes, implementation of the algorithm on LANs, convergence, and computation overhead, are also discussed in the paper.
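
    A toy sketch of the tree-merging idea is shown below (not the paper's protocol or data format): each node rebuilds its routing tree from its neighbours' trees and drops any neighbour path that already contains itself, which is what removes loops; the topology and link costs are hypothetical.

```python
# Each node keeps its routing tree as a map: destination -> (cost, path).
links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5, ("C", "D"): 1}
graph = {}
for (u, v), c in links.items():
    graph.setdefault(u, {})[v] = c
    graph.setdefault(v, {})[u] = c

trees = {n: {n: (0, [n])} for n in graph}           # initially only the root

changed = True
while changed:                                       # repeat until no tree changes
    changed = False
    for node, nbrs in graph.items():
        for nbr, link_cost in nbrs.items():
            for dest, (cost, path) in trees[nbr].items():
                if node in path:                     # would create a loop -> drop
                    continue
                cand = (link_cost + cost, [node] + path)
                if dest not in trees[node] or cand[0] < trees[node][dest][0]:
                    trees[node][dest] = cand
                    changed = True

print(trees["A"]["D"])   # -> (3, ['A', 'B', 'C', 'D'])
```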

  11. A self adaptive hybrid enhanced artificial bee colony algorithm for continuous optimization problems.

    PubMed

    Shan, Hai; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    2015-06-01

    The artificial bee colony (ABC) algorithm is a popular swarm intelligence algorithm inspired by the foraging behavior of honeybee colonies. To improve its convergence ability and search speed and to control the balance between exploration and exploitation, we propose a self-adaptive hybrid enhanced ABC algorithm in this paper. To evaluate the performance of standard ABC, best-so-far ABC (BsfABC), incremental ABC (IABC), and the proposed ABC algorithms, we implemented numerical optimization problems based on the IEEE Congress on Evolutionary Computation (CEC) 2014 test suite. Our experimental results show the comparative performance of standard ABC, BsfABC, IABC, and the proposed ABC algorithms. According to the results, we conclude that the proposed ABC algorithm is competitive with state-of-the-art modified ABC algorithms such as BsfABC and IABC on the benchmark problems defined by the CEC 2014 test suite with dimension sizes of 10, 30, and 50, respectively. PMID:25982071

  13. Path Planning Algorithms for the Adaptive Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general-purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real-world platform carrying a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution that puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
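
    The fleet-level "shuffle" objective can be illustrated with a brute-force search over assignments that minimizes the arrival time of the last vessel; the paper's elimination procedure over a sorted permutation list is more efficient, and the cost matrix below is hypothetical.

```python
import numpy as np
from itertools import permutations

def assign_vessels(cost):
    """cost[i][j] is the transit time of vessel i to region j.  The fleet is in
    place only when the *last* vessel arrives, so we minimise the maximum entry
    picked by each permutation (one region column per vessel row)."""
    n = cost.shape[0]
    best_perm, best_time = None, np.inf
    for perm in permutations(range(n)):
        t = max(cost[i, perm[i]] for i in range(n))
        if t < best_time:
            best_perm, best_time = perm, t
    return best_perm, best_time

# Hypothetical 3-vessel / 3-region transit-time matrix (hours).
cost = np.array([[4.0, 2.0, 7.0],
                 [3.0, 6.0, 5.0],
                 [8.0, 4.0, 3.0]])
perm, t = assign_vessels(cost)
print(perm, t)   # vessel 0 -> region 1, 1 -> 0, 2 -> 2; fleet in place after 3 h
```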

  14. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    NASA Astrophysics Data System (ADS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles-Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Both a linear function and a logarithmic function are adopted to code the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportion that depends on the common value of the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of one kind of silicon-steel sheet's hysteresis loops, and the results are in good agreement with published data.

  15. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for a batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm in terms of accuracy and runtime when handling different types of concept drift. PMID:25879051

  16. A novel adaptive multi-resolution combined watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, QiWei

    2008-04-01

    The rapid development of IT and WWW technologies means that people frequently confront various kinds of authorization and identification problems, especially the copyright problem of digital products. Digital watermarking has emerged as one kind of solution. The balance between robustness and imperceptibility is the constant goal sought by researchers in this field. In order to address this trade-off, a novel adaptive multi-resolution combined digital image watermarking algorithm is proposed in this paper. In the proposed algorithm, the watermark is first decomposed into several sub-bands, and each sub-band is embedded into different DWT coefficients of the carrier image according to its significance. The human visual system (HVS) is considered during embedding, so a larger watermark capacity can be embedded while preserving image quality. The experimental results show that the proposed algorithm performs better in terms of robustness and security and, at the same visual quality, offers larger capacity. Thus the unification of robustness and imperceptibility is achieved.

  17. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
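
    A compressed sketch of the list-based cooling idea is given below; the 2-opt move, list length, iteration counts and random instance are assumptions rather than the authors' settings.

```python
import math, random

def tour_length(tour, dist):
    """Total length of a closed tour for a distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_neighbour(tour, rng):
    """Reverse a random segment of the tour (a standard 2-opt move)."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def list_based_sa(dist, list_len=40, inner=150, outer=200, p0=0.9, seed=0):
    """List-based SA: the Metropolis test always uses the maximum of a temperature
    list; each time a worse tour is accepted, the temperature that would have made
    that acceptance borderline replaces the list maximum, so the cooling schedule
    adapts itself to the instance."""
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    # Build the initial temperature list from the cost changes of random moves.
    temps = []
    for _ in range(list_len):
        delta = abs(tour_length(two_opt_neighbour(tour, rng), dist) - cur_len)
        temps.append(max(delta, 1e-9) / -math.log(p0))
    for _ in range(outer):
        t_max = max(temps)
        accepted_ts = []
        for _ in range(inner):
            cand = two_opt_neighbour(tour, rng)
            delta = tour_length(cand, dist) - cur_len
            if delta <= 0:
                tour, cur_len = cand, cur_len + delta
            else:
                r = max(rng.random(), 1e-12)          # avoid log(0) below
                if r < math.exp(-delta / t_max):
                    tour, cur_len = cand, cur_len + delta
                    accepted_ts.append(-delta / math.log(r))  # borderline temperature
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        if accepted_ts:                               # adapt the temperature list
            temps.remove(t_max)
            temps.append(sum(accepted_ts) / len(accepted_ts))
    return best, best_len

# Hypothetical instance: 15 random points in the unit square.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(15)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = list_based_sa(dist)
print(round(length, 3))
```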

  18. Maximal use of minimal libraries through the adaptive substituent reordering algorithm.

    PubMed

    Liang, Fan; Feng, Xiao-jiang; Lowry, Michael; Rabitz, Herschel

    2005-03-31

    This paper describes an adaptive algorithm for interpolation over a library of molecules subjected to synthesis and property assaying. Starting with a coarse sampling of the library compounds, the algorithm finds the optimal substituent orderings on all of the functionalized scaffold sites to allow for accurate property interpolation over all remaining compounds in the full library space. A previous paper introduced the concept of substituent reordering and a smoothness-based criterion to search for optimal orderings (Shenvi, N.; Geremia, J. M.; Rabitz, H. J. Phys. Chem. A 2003, 107, 2066). Here, we propose a data-driven root-mean-squared (RMS) criterion and a combined RMS/smoothness criterion as alternative methods for the discovery of optimal substituent orderings. Error propagation from the property measurements of the sampled compounds is determined to provide confidence intervals on the interpolated molecular property values, and a substituent rescaling technique is introduced to manage poorly designed/sampled libraries. Finally, various factors are explored that can influence the applicability and interpolation quality of the algorithm. An adaptive methodology is proposed to iteratively and efficiently use laboratory experiments to optimize these algorithmic factors, so that the accuracy of property predictions is maximized. The enhanced algorithm is tested on copolymer and transition metal complex libraries, and the results demonstrate the capability of the algorithm to accurately interpolate various properties of both molecular libraries.

  19. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    NASA Astrophysics Data System (ADS)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For Cooperative Adaptive Cruise Control (CACC) algorithms, existing research mainly focuses on how inter-vehicle communication can be used to develop the CACC controller and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyses the range of CACC controller parameters that maintains string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, a platoon of five vehicles is simulated in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain-collision accidents under real-world circumstances. In summary, this research proposes an improved CACC algorithm that can guarantee string stability when inter-vehicle communication is partially invalid.

  20. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.

  1. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization and is thus more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  2. Nonlinear fitness space structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    NASA Astrophysics Data System (ADS)

    Tiilikainen, J.; Tilli, J.-M.; Bosund, V.; Mattila, M.; Hakkarainen, T.; Airaksinen, V.-M.; Lipsanen, H.

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present.

  3. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    NASA Astrophysics Data System (ADS)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good-quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper, a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform-domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetric property of the signal to represent it in the frequency domain with fewer frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter, which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform-domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm at retaining the geometrical characteristics of the ECG signal.
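
    The adaptive line enhancer structure that the proposed TVGLMS refines can be sketched with a plain time-domain LMS predictor (no DCT stage or Griffiths' gradient here); the synthetic "ECG", delay and step size are assumptions.

```python
import numpy as np

def lms_ale(x, taps=32, delay=5, mu=0.05):
    """Adaptive line enhancer: an LMS-adapted FIR filter predicts the correlated
    (quasi-periodic) part of x from a delayed copy of x, so broadband noise,
    which decorrelates over the delay, is rejected from the output y."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(taps + delay, len(x)):
        xd = x[n - delay - taps + 1:n - delay + 1][::-1]   # delayed input window
        y[n] = w @ xd
        e = x[n] - y[n]
        w += mu * e * xd / (1e-6 + xd @ xd)                # normalised (variable) step
    return y

# Hypothetical quasi-periodic "ECG" buried in white noise.
rng = np.random.default_rng(7)
t = np.arange(6000)
ecg = np.sin(2 * np.pi * t / 200) ** 8            # narrow positive peaks every 100 samples
noisy = ecg + 0.3 * rng.standard_normal(t.size)
enhanced = lms_ale(noisy)
print(round(float(np.corrcoef(enhanced[1000:], ecg[1000:])[0, 1]), 2))
```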

  4. An adaptive inverse iteration algorithm using interpolating multiwavelets for structural eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Wang, Youming; Chen, Xuefeng; He, Zhengjia

    2011-02-01

    Structural eigenvalues have been broadly applied in modal analysis, damage detection, vibration control, etc. In this paper, the interpolating multiwavelets are custom designed based on stable completion method to solve structural eigenvalue problems. The operator-orthogonality of interpolating multiwavelets gives rise to highly sparse multilevel stiffness and mass matrices of structural eigenvalue problems and permits the incremental computation of the eigenvalue solution in an efficient manner. An adaptive inverse iteration algorithm using the interpolating multiwavelets is presented to solve structural eigenvalue problems. Numerical examples validate the accuracy and efficiency of the proposed algorithm.
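
    The inverse-iteration core of such a scheme is easy to sketch for a small generalized eigenproblem K x = lambda M x; the multilevel multiwavelet matrices of the paper are replaced here by a hypothetical dense spring-mass chain.

```python
import numpy as np

def inverse_iteration(K, M, tol=1e-10, max_iter=200):
    """Basic inverse iteration for the smallest eigenpair of K x = lam M x
    (the structural generalized eigenvalue problem)."""
    x = np.ones(K.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(K, M @ x)                # one inverse-iteration step
        x = y / np.linalg.norm(y)
        lam = (x @ K @ x) / (x @ M @ x)              # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, x

# Hypothetical stiffness/mass matrices of a small fixed-fixed spring-mass chain.
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
lam, mode = inverse_iteration(K, M)
print(round(lam, 4))   # lowest eigenvalue, 4*sin(pi/(2*(n+1)))**2 for this chain
```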

  5. Should the parameters of a BCI translation algorithm be continually adapted?

    PubMed

    McFarland, Dennis J; Sarnacki, William A; Wolpaw, Jonathan R

    2011-07-15

    People with or without motor disabilities can learn to control sensorimotor rhythms (SMRs) recorded from the scalp to move a computer cursor in one or more dimensions or can use the P300 event-related potential as a control signal to make discrete selections. Data collected from individuals using an SMR-based or P300-based BCI were evaluated offline to estimate the impact on performance of continually adapting the parameters of the translation algorithm during BCI operation. The performance of the SMR-based BCI was enhanced by adaptive updating of the feature weights or adaptive normalization of the features. In contrast, P300 performance did not benefit from either of these procedures. PMID:21571004

  6. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  7. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids, and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  8. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    SciTech Connect

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-06-15

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restricting for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs as the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with similar DVH trade-offs as the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan with high efficiency. It is found that the new algorithm can significantly improve plan quality and planning efficiency in ART replanning and automatic treatment planning.

  9. The Adaptive Analysis of Visual Cognition using Genetic Algorithms

    PubMed Central

    Cook, Robert G.; Qadri, Muhammad A. J.

    2014-01-01

    Two experiments used a novel, open-ended, and adaptive test procedure to examine visual cognition in animals. Using a genetic algorithm, a pigeon was tested repeatedly from a variety of different initial conditions for its solution to an intermediate brightness search task. On each trial, the animal had to accurately locate and peck a target element of intermediate brightness from among a variable number of surrounding darker and lighter distractor elements. Displays were generated from six parametric variables, or genes (distractor number, element size, shape, spacing, target brightness, distractor brightness). Display composition changed over time, or evolved, as a function of the bird’s differential accuracy within the population of values for each gene. Testing three randomized initial conditions and one set of controlled initial conditions, element size and number of distractors were identified as the most important factors controlling search accuracy, with distractor brightness, element shape, and spacing making secondary contributions. The resulting changes in this multidimensional stimulus space suggested the existence of a set of conditions that the bird repeatedly converged upon regardless of initial conditions. This psychological “attractor” represents the cumulative action of the cognitive operations used by the pigeon in solving and performing this search task. The results are discussed regarding their implications for visual cognition in pigeons and the usefulness of adaptive, subject-driven experimentation for investigating human and animal cognition more generally. PMID:24000905

  10. Algorithm for localized adaptive diffuse optical tomography and its application in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Naser, Mohamed A.; Patterson, Michael S.; Wong, John W.

    2014-04-01

    A reconstruction algorithm for diffuse optical tomography based on diffusion theory and finite element method is described. The algorithm reconstructs the optical properties in a permissible domain or region-of-interest to reduce the number of unknowns. The algorithm can be used to reconstruct optical properties for a segmented object (where a CT-scan or MRI is available) or a non-segmented object. For the latter, an adaptive segmentation algorithm merges contiguous regions with similar optical properties thereby reducing the number of unknowns. In calculating the Jacobian matrix the algorithm uses an efficient direct method so the required time is comparable to that needed for a single forward calculation. The reconstructed optical properties using segmented, non-segmented, and adaptively segmented 3D mouse anatomy (MOBY) are used to perform bioluminescence tomography (BLT) for two simulated internal sources. The BLT results suggest that the accuracy of reconstruction of total source power obtained without the segmentation provided by an auxiliary imaging method such as x-ray CT is comparable to that obtained when using perfect segmentation.

  11. Adaptive vector quantization of MR images using online k-means algorithm

    NASA Astrophysics Data System (ADS)

    Shademan, Azad; Zia, Mohammad A.

    2001-12-01

    The k-means algorithm is widely used to design image codecs using vector quantization (VQ). In this paper, we focus on an adaptive approach to implementing a VQ technique using the online version of the k-means algorithm, in which the size of the codebook is adapted continuously to the statistical behavior of the image. Based on the statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. An application of this approach is telemedicine, where sequences of highly correlated medical images, e.g. consecutive brain slices, are transmitted over a low bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images, and simulation results on a sample sequence are given. The proposed method has been compared to the standard k-means algorithm in terms of PSNR, MSE, and the elapsed time to complete the algorithm.
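
    The online k-means update with codeword pruning can be sketched as follows; the block size, codebook size, learning-rate schedule and pruning threshold are assumptions, and the "image" is random data rather than an MR slice.

```python
import numpy as np

def online_kmeans_vq(blocks, k=32, lr0=0.1, min_count=5):
    """Online (sequential) k-means codebook training for vector quantisation:
    each image block updates only its nearest codeword, with a per-codeword
    learning rate that decays with its hit count.  Codewords matched by fewer
    than `min_count` blocks are pruned at the end (low-density clusters)."""
    rng = np.random.default_rng(0)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
    counts = np.ones(k)
    for b in blocks:
        j = np.argmin(((codebook - b) ** 2).sum(axis=1))   # nearest codeword
        counts[j] += 1
        codebook[j] += (lr0 / counts[j]) * (b - codebook[j])
    keep = counts - 1 >= min_count
    return codebook[keep], counts[keep] - 1

# Hypothetical data: 4x4 blocks of a random image, flattened to 16-vectors.
rng = np.random.default_rng(8)
img = rng.normal(100, 30, size=(128, 128))
blocks = img.reshape(32, 4, 32, 4).swapaxes(1, 2).reshape(-1, 16)
codebook, usage = online_kmeans_vq(blocks, k=32)
print(len(codebook), "codewords kept")
```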

  12. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH; various fixed segment durations have been employed by different commercial solutions and researchers, each with its own merits. Most recently, the use of variable segment duration in DASH has emerged, but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications for the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to an insight into the time to download the next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next segment.

  13. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from the wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance masking is then combined with contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the perceptual model can better match the different characteristics of various images. The weighting function is chosen such that fidelity, imperceptibility and robustness are preserved without making any perceptual difference to the image quality.

  14. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating a pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results of the proposed method show better performance in denoising and QRS detection in comparison with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  15. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to neural network or fuzzy system-based adaptive ILC schemes and classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second even uses a single iterative variable provided that some bound information on the system dynamics is known. As a result, the memory space required in real-time ILC implementations is greatly reduced.

  16. Algorithme d'adaptation du filtre de Kalman aux variations soudaines de bruit

    NASA Astrophysics Data System (ADS)

    Canciu, Vintila

    This research targets the case of Kalman filtering as applied to linear time-invariant systems having unknown process noise covariance and measurement noise covariance matrices, and addresses the problem represented by incomplete a priori knowledge of these two filter initialization parameters. The goal of this research is to determine in real time both the process covariance matrix and the noise covariance matrix in the context of adaptive Kalman filtering. The resulting filter, called the evolutionary adaptive Kalman filter, is able to adapt to sudden noise variations and constitutes a hybrid solution for adaptive Kalman filtering based on metaheuristic algorithms. MATLAB/Simulink simulation using several processes and covariance matrices, plus comparison with other filters, was selected as the validation method. The Cramér-Rao Lower Bound (CRLB) was used as the performance criterion. The thesis begins with a description of the problem under consideration (the design of a Kalman filter that is able to adapt to sudden noise variations), followed by a typical application (an INS-GPS integrated navigation system) and by a statistical analysis of publications related to adaptive Kalman filtering. Next, the thesis presents the current architectures of adaptive Kalman filtering: the innovation adaptive estimator (IAE) and the multiple model adaptive estimator (MMAE). It briefly presents their formulation, their behavior, and the limits of their performance. The thesis continues with the architectural synthesis of the evolutionary adaptive Kalman filter. The steps involved in the solution of the problem under consideration are also presented: an analysis of Kalman filtering and sub-optimal filtering methods, a comparison of current adaptive Kalman and sub-optimal filtering methods, the emergence of the evolutionary adaptive Kalman filter as an enrichment of sub-optimal filtering with the help of biologically-inspired computational intelligence methods, and the step-by-step architectural
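
    As a point of comparison for the IAE architecture mentioned above (not the evolutionary filter developed in the thesis), the sketch below re-estimates the measurement noise variance from a sliding window of innovations in a constant-velocity Kalman filter; the model, window length and noise profile are assumptions.

```python
import numpy as np

def iae_kalman(zs, q=1e-3, r0=1.0, window=30, dt=1.0):
    """Kalman filter for a 1-D constant-velocity target in which the measurement
    noise variance R is re-estimated from the recent innovation sequence
    (innovation-based adaptive estimation), so the filter can react when the
    sensor noise level suddenly changes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x, P, R = np.zeros(2), np.eye(2), r0
    innovations, estimates = [], []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation and adaptive update of R
        nu = z - (H @ x).item()
        innovations.append(nu)
        if len(innovations) >= window:
            c_hat = np.mean(np.square(innovations[-window:]))
            R = max(c_hat - (H @ P @ H.T).item(), 1e-6)   # R ~ E[nu^2] - H P H^T
        # Measurement update
        S = (H @ P @ H.T).item() + R
        K = (P @ H.T).ravel() / S
        x = x + K * nu
        P = (np.eye(2) - np.outer(K, H)) @ P
        estimates.append(x[0])
    return np.array(estimates), R

# Hypothetical track whose sensor noise jumps from sigma=0.5 to sigma=3 halfway.
rng = np.random.default_rng(9)
truth = 0.2 * np.arange(400)
noise = np.concatenate([0.5 * rng.standard_normal(200), 3.0 * rng.standard_normal(200)])
est, R_final = iae_kalman(truth + noise)
print(round(R_final, 2))   # should settle near the new noise variance, 3**2 = 9
```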

  17. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle, and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
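
    The variance-dependent smoothing at the core of an adaptive Wiener filter can be sketched as follows. This is a generic illustration under stated assumptions, not the author's exact algorithm: the noise variance is passed in directly rather than estimated from the envelope of the Sv minima, and the window size is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(sv, noise_var, size=5):
    """Locally adaptive Wiener filter: smooth heavily where the local variance
    is close to the noise variance (background), and lightly where the local
    variance is high (echotraces)."""
    local_mean = uniform_filter(sv, size=size)
    local_sq_mean = uniform_filter(sv ** 2, size=size)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    # Wiener gain: ~0 for pure-noise patches, ~1 where signal dominates.
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (sv - local_mean)
```

    Applied to a linear-domain echogram, a call such as `adaptive_wiener(echogram, noise_var)` leaves strong echotraces essentially untouched while flattening the background.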

  18. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function serves as the transformation function, and a novel criterion for measuring image quality considers three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques, such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
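
    The global intensity transformation used in this family of methods can be sketched with the regularized incomplete Beta function; the CS-PSO search over the parameters and the three-factor quality criterion are not reproduced here, and the 8-bit output scaling is an assumption.

```python
import numpy as np
from scipy.special import betainc

def beta_transform(image, alpha, beta):
    """Global contrast transformation via the regularized incomplete Beta
    function; (alpha, beta) are the parameters a CS-PSO style optimizer
    would tune against an image-quality fitness measure."""
    lo, hi = float(image.min()), float(image.max())
    u = (image.astype(np.float64) - lo) / max(hi - lo, 1e-12)  # normalize to [0, 1]
    v = betainc(alpha, beta, u)                                # monotone remapping
    return np.clip(v * 255.0, 0, 255).astype(np.uint8)
```

    With alpha = beta > 1 the mapping is S-shaped and stretches mid-range contrast, which is the usual regime for enhancing low-contrast images.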

  19. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function serves as the transformation function, and a novel criterion for measuring image quality considers three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques, such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  20. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function serves as the transformation function, and a novel criterion for measuring image quality considers three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques, such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  1. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

    In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by self-adaptively selecting the lead orientation vector. Its real-valued formulation, obtained by forward-backward (FB) smoothing and a real-valued inverse covariance matrix, is also proposed, which reduces the computational complexity and distinguishes coherent signals. Simulation results show that the two new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experimental results using a MEMS vector hydrophone array in lake trials show the engineering practicability of the two new algorithms.

  2. Online adaptive policy learning algorithm for H∞ state feedback control of unknown affine nonlinear discrete-time systems.

    PubMed

    Zhang, Huaguang; Qin, Chunbin; Jiang, Bin; Luo, Yanhong

    2014-12-01

    The problem of H∞ state feedback control of affine nonlinear discrete-time systems with unknown dynamics is investigated in this paper. An online adaptive policy learning algorithm (APLA) based on adaptive dynamic programming (ADP) is proposed for learning, in real time, the solution to the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in the H∞ control problem. In the proposed algorithm, three neural networks (NNs) are utilized to find suitable approximations of the optimal value function and the saddle-point feedback control and disturbance policies. Novel weight-updating laws are given to tune the critic, actor, and disturbance NNs simultaneously by using data generated in real time along the system trajectories. Considering NN approximation errors, we provide a stability analysis of the proposed algorithm with a Lyapunov approach. Moreover, the need for the system input dynamics in the proposed algorithm is relaxed by using a NN identification scheme. Finally, simulation examples show the effectiveness of the proposed algorithm. PMID:25095274

  3. An adaptive displacement estimation algorithm for improved reconstruction of thermal strain.

    PubMed

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M; Tillman, Bryan; Leers, Steven A; Kim, Kang

    2015-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the SNR is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7 to 350% and the spatial accuracy by 1.2 to 23.0% (P < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction.
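
    The selection logic reported above can be sketched directly from the stated thresholds. This is a schematic illustration, not the authors' code: `loupas_estimate` and `nxcorr_estimate` are hypothetical callables for the two underlying estimators, and using a coarse Loupas pre-estimate to test the displacement magnitude is an assumption.

```python
def adaptive_displacement(rf_ref, rf_def, loupas_estimate, nxcorr_estimate,
                          wavelength, system_snr_db):
    """Choose between two displacement estimators using the thresholds
    reported above: NXcorr when the displacement exceeds lambda/8 and the
    electronic SNR exceeds 25.5 dB, Loupas' estimator otherwise."""
    coarse = loupas_estimate(rf_ref, rf_def)    # cheap first guess of the shift
    if abs(coarse) > wavelength / 8.0 and system_snr_db > 25.5:
        return nxcorr_estimate(rf_ref, rf_def)  # less biased for large shifts
    return coarse                               # lower variance for small shifts
```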

  4. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction. PMID:25585398

  5. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus, exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspiration from the topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both
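
    The global-surrogate variant of this workflow can be sketched in a few lines. The sketch below is an illustration under stated assumptions rather than the authors' method: `run_simulation` is a hypothetical stand-in for the expensive code (returning 1 for failure, 0 for success), a k-nearest-neighbour classifier plays the role of the global model, and new samples are drawn where the predicted failure probability is closest to 0.5, i.e., near the estimated limit surface. The initial design is assumed to contain at least five points with both outcomes represented.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adaptive_limit_surface(run_simulation, initial_points, candidate_pool, n_iter=50):
    """Iteratively refine the estimated limit surface (failure/success boundary)."""
    X = [np.asarray(p) for p in initial_points]
    y = [run_simulation(p) for p in X]                # 1 = failure, 0 = success
    for _ in range(n_iter):
        surrogate = KNeighborsClassifier(n_neighbors=5).fit(np.array(X), np.array(y))
        proba = surrogate.predict_proba(candidate_pool)[:, 1]
        idx = int(np.argmin(np.abs(proba - 0.5)))     # most ambiguous candidate
        X.append(np.asarray(candidate_pool[idx]))
        y.append(run_simulation(candidate_pool[idx]))
    return surrogate, np.array(X), np.array(y)
```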

  6. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps

    PubMed Central

    Mao, Wei; Li, Hao-ru

    2016-01-01

    Although the artificial bee colony algorithm is one of the most popular recent swarm intelligence techniques, it is poor at exploitation and has defects such as slow search speed, poor population diversity, stagnation during the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm with respect to the initial population structure, subpopulation groups, step updating, and population elimination. Further, based on opposition-based learning theory and the new modifications, an improved S-type grouping method is proposed, and the original roulette wheel selection is replaced by a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test function suite, six benchmark functions with dimensions D = 20 and D = 40 are chosen and applied in experiments to analyze and compare the iteration speed and accuracy of the new modified algorithm. The experimental results show that the new modified algorithm searches faster and more stably, quickly improves poor population diversity, and finds the global optimal solutions. PMID:27293426

  7. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps.

    PubMed

    Mao, Wei; Lan, Heng-You; Li, Hao-Ru

    2016-01-01

    Although the artificial bee colony algorithm is one of the most popular recent swarm intelligence techniques, it is poor at exploitation and has defects such as slow search speed, poor population diversity, stagnation during the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm with respect to the initial population structure, subpopulation groups, step updating, and population elimination. Further, based on opposition-based learning theory and the new modifications, an improved S-type grouping method is proposed, and the original roulette wheel selection is replaced by a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test function suite, six benchmark functions with dimensions D = 20 and D = 40 are chosen and applied in experiments to analyze and compare the iteration speed and accuracy of the new modified algorithm. The experimental results show that the new modified algorithm searches faster and more stably, quickly improves poor population diversity, and finds the global optimal solutions. PMID:27293426

  8. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods for speech enhancement have been proposed over the years, with the main design goals being quality and intelligibility. This paper presents a novel speech enhancement method based on compressive sensing (CS), a paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients, together with structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal from only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well over a wide range of speech tests and provides good speech enhancement, with better noise suppression than conventional approaches and without obvious degradation of speech quality.

  9. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
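
    The hybrid idea, an adaptive step size plus a simulated-annealing acceptance test layered on a cuckoo-search-style population, can be illustrated compactly. The sketch below is generic: it uses Gaussian steps in place of full Levy flights, and the decay and cooling schedules are illustrative assumptions rather than the authors' adjustment rules.

```python
import numpy as np

def adaptive_cs_sa(objective, bounds, n_nests=25, n_iter=500, seed=0):
    """Cuckoo-search-style optimizer with an adaptive step size that shrinks
    over the run and a Metropolis (simulated annealing) acceptance rule."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    nests = rng.uniform(lo, hi, size=(n_nests, len(lo)))
    fitness = np.array([objective(x) for x in nests])
    for t in range(n_iter):
        step = 0.5 * (hi - lo) * np.exp(-3.0 * t / n_iter)   # adaptive step decay
        temperature = max(1e-3, 1.0 - t / n_iter)            # cooling schedule
        for i in range(n_nests):
            trial = np.clip(nests[i] + step * rng.standard_normal(len(lo)), lo, hi)
            delta = objective(trial) - fitness[i]
            # Always accept improvements; accept worse moves with SA probability.
            if delta < 0 or rng.random() < np.exp(-delta / temperature):
                nests[i], fitness[i] = trial, fitness[i] + delta
    best = int(np.argmin(fitness))
    return nests[best], fitness[best]
```

    For chaotic-system parameter estimation, `objective` would measure the discrepancy between the trajectory simulated with the trial parameters and the observed data.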

  10. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  11. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395

  12. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    SciTech Connect

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to estimate the parameters of chaotic systems. To address the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may reduce the efficiency of the algorithm. To balance and enhance the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is introduced to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may degrade the quality of the optimization, so the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated on the Lorenz chaotic system under noiseless and noisy conditions. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, the genetic algorithm, and the particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  13. An Adaptable Power System with Software Control Algorithm

    NASA Technical Reports Server (NTRS)

    Castell, Karen; Bay, Mike; Hernandez-Pellerano, Amri; Ha, Kong

    1998-01-01

    A low-cost, flexible, and modular spacecraft power system design was developed in response to a call for an architecture that could accommodate multiple missions in the small to medium load range. Three upcoming satellites will use this design, with one launch date in 1999 and two in the year 2000. The design consists of modular hardware that can be scaled up or down, without additional cost, to suit missions in the 200 to 600 Watt orbital average load range. The design will be applied to satellite orbits that are circular, polar elliptical, and a libration point orbit. Mission-unique adaptations are accomplished in software and firmware. In designing this advanced, adaptable power system, the major goals were reductions in weight, volume, and cost. This power system design represents reductions in weight of 78 percent, volume of 86 percent, and cost of 65 percent from previous comparable systems. The effort to miniaturize the electronics without sacrificing performance has created streamlined power electronics with control functions residing in the system microprocessor. The power system design can handle any battery size up to 50 Amp-hours and any battery technology. The three current implementations will use both nickel cadmium and nickel hydrogen batteries ranging in size from 21 to 50 Amp-hours. Multiple batteries can be used by adding another battery module. Any solar cell technology can be used, and various array layouts can be incorporated with no change in Power System Electronics (PSE) hardware. Other features of the design are the standardized interfaces between cards and subsystems and immunity to radiation effects up to 30 krad Total Ionizing Dose (TID) and 35 MeV/cm(exp 2)-kg for Single Event Effects (SEE). The control algorithm for the power system resides in a radiation-hardened microprocessor. A table-driven software design allows for flexibility in mission-specific requirements. By storing critical power system constants in memory, modifying the system

  14. Genetic algorithm-based form error evaluation

    NASA Astrophysics Data System (ADS)

    Cui, Changcai; Li, Bing; Huang, Fugui; Zhang, Rencheng

    2007-07-01

    Form error evaluation of geometrical products is a nonlinear optimization problem for which solutions have been attempted by different methods of some complexity. A genetic algorithm (GA) was developed to deal with the problem, which proved simple to understand and implement, and its key techniques were investigated in detail. First, the fitness function of the GA was discussed in depth as the bridge between the GA and the concrete problems to be solved. Second, the real-number-based representation of the desired solutions in the continuous-space optimization problem was discussed. Third, several improved evolutionary strategies of the GA were described with emphasis: a selection operation of 'odd number selection plus roulette wheel selection', a crossover operation of 'arithmetic crossover between near relatives and far relatives', and an 'adaptive Gaussian' mutation operation. After evolution from generation to generation with these strategies, the initial population, produced stochastically around the least-squares solutions of the problem, is updated and improved iteratively until the best chromosome or individual of the GA appears. Finally, some examples were given to verify the evolutionary method. Experimental results show that the GA-based method can find desired solutions that are superior to the least-squares solutions, except for a few examples in which the GA-based method obtains results similar to those of the least-squares method. Compared with other optimization techniques, the GA-based method obtains almost equal results but with less complicated models and computation time.
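
    The 'adaptive Gaussian' mutation mentioned above can be sketched as a mutation whose spread shrinks as the run progresses; the decay law, mutation rate, and parameter names below are illustrative assumptions, not those of the cited paper.

```python
import numpy as np

def adaptive_gaussian_mutation(individual, generation, max_generations,
                               sigma0=1.0, rate=0.1, rng=None):
    """Mutate a real-coded chromosome (NumPy array) with Gaussian noise whose
    standard deviation decays over the generations: coarse search early, fine
    tuning around the least-squares starting solutions later."""
    rng = rng or np.random.default_rng()
    sigma = max(sigma0 * (1.0 - generation / max_generations), 1e-6)
    mask = rng.random(individual.shape) < rate      # which genes to mutate
    return individual + mask * rng.normal(0.0, sigma, individual.shape)
```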

  15. Digital speech enhancement based on DTOMP and adaptive quantile

    NASA Astrophysics Data System (ADS)

    Wang, Anna; Zhou, Xiaoxing; Xue, Changliang; Sun, Xiyan; Sun, Hongying

    2013-03-01

    Compressed sensing (CS), which can effectively extract the information contained in a signal, is a new sampling theory based on signal sparsity. This paper applies CS theory to digital speech enhancement: it proposes an adaptive quantile method for noise power estimation and combines it with an improved double-threshold orthogonal matching pursuit (DTOMP) algorithm for speech reconstruction, thereby achieving speech enhancement. Compared with simulation results for spectral subtraction and a subspace algorithm, the experimental results verify the feasibility and effectiveness of the proposed algorithm for speech enhancement.
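
    An adaptive-quantile noise power estimate of the kind described can be sketched per frequency bin over a sliding block of short-time frames; the quantile level and block length below are illustrative assumptions.

```python
import numpy as np

def quantile_noise_psd(power_spectrogram, q=0.2, block=40):
    """Estimate noise power per frequency bin as a low quantile of the
    short-time power over a sliding block of frames, so the estimate tracks
    slowly varying noise. power_spectrogram has shape (freqs, frames)."""
    n_freq, n_frames = power_spectrogram.shape
    noise = np.empty_like(power_spectrogram, dtype=float)
    for t in range(n_frames):
        start = max(0, t - block + 1)
        noise[:, t] = np.quantile(power_spectrogram[:, start:t + 1], q, axis=1)
    return noise
```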

  16. New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji

    Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability issues and the difficulties associated with their adaptive algorithms. In this paper, adaptive IIR lattice filters are therefore studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of stationary points, and a transformation achieving this goal is suggested, which yields algorithms that can be implemented efficiently. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented in order to obtain the coefficients in lattice form more efficiently and with lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms; these previous versions may fail to preserve the stability of stationary points.

  17. Feedback in Videogame-Based Adaptive Training

    ERIC Educational Resources Information Center

    Rivera, Iris Daliz

    2010-01-01

    The field of training has been changing rapidly due to advances in technology such as videogame-based adaptive training. Videogame-based adaptive training has provided flexibility and adaptability for training in cost-effective ways. Although this method of training may have many benefits for the trainee, current research has not kept up to pace…

  18. Remote sensing image subpixel mapping based on adaptive differential evolution.

    PubMed

    Zhong, Yanfei; Zhang, Liangpei

    2012-10-01

    In this paper, a novel subpixel mapping algorithm based on an adaptive differential evolution (DE) algorithm, namely, adaptive-DE subpixel mapping (ADESM), is developed to perform the subpixel mapping task for remote sensing images. Subpixel mapping may provide a fine-resolution map of class labels from coarser spectral unmixing fraction images, with the assumption of spatial dependence. In ADESM, to utilize DE, the subpixel mapping problem is transformed into an optimization problem by maximizing the spatial dependence index. The traditional DE algorithm is an efficient and powerful population-based stochastic global optimizer in continuous optimization problems, but it cannot be applied to the subpixel mapping problem in a discrete search space. In addition, it is not an easy task to properly set control parameters in DE. To avoid these problems, this paper utilizes an adaptive strategy without user-defined parameters, and a reversible-conversion strategy between continuous space and discrete space, to improve the classical DE algorithm. During the process of evolution, candidate solutions are further improved by enhanced evolution operators, e.g., mutation, crossover, repair, exchange, insertion, and an effective local search. Experimental results using different types of remote sensing images show that the ADESM algorithm consistently outperforms the previous subpixel mapping algorithms in all the experiments. Based on sensitivity analysis, ADESM, with its self-adaptive control parameter setting, is better than, or at least comparable to, the standard DE algorithm, when considering the accuracy of subpixel mapping, and hence provides an effective new approach to subpixel mapping for remote sensing imagery.

  19. Using patient-specific phantoms to evaluate deformable image registration algorithms for adaptive radiation therapy.

    PubMed

    Stanley, Nick; Glide-Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J; Zhong, Hualiang

    2013-11-04

    The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline-based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung cancer patients and one prostate cancer patient. FEM-generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast Fourier transform technique, an image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0-3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0-1.9 mm in the prostate, 1.9-2.4 mm in the rectum, and 1.8-2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest. This may suggest that

  20. Control algorithms of liquid crystal phased arrays used as adaptive optic correctors

    NASA Astrophysics Data System (ADS)

    Dayton, David; Gonglewski, John; Browne, Stephen

    2006-08-01

    Multi-segment liquid crystal phased arrays have been demonstrated as adaptive optics elements for the correction of atmospheric turbulence. High-speed dual-frequency nematic liquid crystal has sufficient bandwidth to keep up with moderate atmospheric Greenwood frequencies. However, the piston-only, segmented spatial correction of these devices requires novel approaches to control algorithms, especially when they are used with Shack-Hartmann wavefront sensors. In this presentation we explore such approaches and their effects on closed-loop Strehl ratios. A Zernike modal-based approach has produced the best results. The presentation will contain results from experiments with a Meadowlark Optics liquid crystal device.

  1. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In the TV minimization, the step sizes of the steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
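
    The alternating POCS/TV structure with a data-driven TV step can be sketched as follows. This is a simplified illustration under stated assumptions rather than the paper's algorithm: `A` and `At` are hypothetical forward- and back-projection callables, the data-fidelity update is a plain gradient step with non-negativity instead of full ART, periodic boundaries are used in the TV gradient for brevity, and the scaling constant `beta` is an assumption.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term (periodic boundaries)."""
    dx = np.roll(img, -1, axis=0) - img
    dy = np.roll(img, -1, axis=1) - img
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    return -((gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1)))

def pocs_tv_reconstruct(A, At, sino, shape, n_iter=20, n_tv=10, lam=0.1, beta=0.2):
    """Alternate a POCS-style data-fidelity update with TV steepest descent,
    scaling the TV step by the size of the preceding data-fidelity change."""
    img = np.zeros(shape)
    for _ in range(n_iter):
        residual = sino - A(img)
        img_new = np.maximum(img + lam * At(residual), 0.0)  # fidelity + non-negativity
        change = np.linalg.norm(img_new - img)
        img = img_new
        step = beta * change                                  # adaptive TV step size
        for _ in range(n_tv):
            g = tv_gradient(img)
            img -= step * g / max(np.linalg.norm(g), 1e-12)
    return img
```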

  2. Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1987-01-01

    An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.

  3. A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids

    NASA Technical Reports Server (NTRS)

    Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.

    1993-01-01

    Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.

  4. Comparative study of adaptive-noise-cancellation algorithms for intrusion detection systems

    SciTech Connect

    Claassen, J.P.; Patterson, M.M.

    1981-01-01

    Some intrusion detection systems are susceptible to nonstationary noise, resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential for removing correlated noise components prior to detection. For such noise-susceptible systems, the suitability of a noise reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in light of the nature of intruder and noise signals.
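
    For the two-channel case, a standard normalized-LMS noise canceller illustrates the basic mechanism being compared; the filter length and step size below are illustrative, and this is a generic textbook scheme rather than any specific algorithm from the study.

```python
import numpy as np

def nlms_noise_canceller(primary, reference, n_taps=32, mu=0.5, eps=1e-6):
    """Two-channel adaptive noise cancellation: the reference (noise-only)
    channel is filtered to match the correlated noise in the primary channel,
    and the error signal is the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # estimate of the correlated noise
        e = primary[n] - y                  # cleaned sample
        w += mu * e * x / (x @ x + eps)     # normalized LMS weight update
        out[n] = e
    return out
```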

  5. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced; the system call sequence of a process is used as the source data. By using information from the system call sequences of both normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithm in this field.

  6. A meta-learning system based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of such an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimating the future states of parameters and is also applied to finding optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research on genetic algorithms and their capability to learn to learn.

  7. An adaptive guidance algorithm for an aerodynamically assisted orbital plane change maneuver. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Blissit, J. A.

    1986-01-01

    Using analysis results from the post trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic, and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of ±30% density biases, two density profile models, ±15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.

  8. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.

  9. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters is implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time domain and the frequency domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications of this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
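
    The noise-model/threshold idea can be sketched with exponentially updated background statistics; the forgetting factor and the threshold multiplier below are illustrative assumptions, and a full implementation would apply the same rule per frequency bin as well as in the time domain.

```python
import numpy as np

def adaptive_threshold_detect(x, alpha=0.01, k=5.0):
    """Flag samples that exceed a threshold derived from running
    (exponentially weighted) background-noise statistics; the statistics are
    frozen during detections so the noise model is not contaminated."""
    mean, var = float(x[0]), float(np.var(x[:32]) + 1e-12)  # crude initialization
    hits = np.zeros(len(x), dtype=bool)
    for n, v in enumerate(x):
        if v > mean + k * np.sqrt(var):
            hits[n] = True                                   # detection: skip update
        else:
            mean = (1 - alpha) * mean + alpha * v
            var = (1 - alpha) * var + alpha * (v - mean) ** 2
    return hits
```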

  10. An adaptive multi-level simulation algorithm for stochastic biological systems

    SciTech Connect

    Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  11. An adaptive multi-level simulation algorithm for stochastic biological systems

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Giles, M. B.; Baker, R. E.

    2015-01-01

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  12. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. PMID:18348941
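
    The weak-learner choice described above (decision stumps inside AdaBoost) maps directly onto scikit-learn; the sketch below assumes the categorical fields have already been numerically encoded and the labels are 1 for attack and 0 for normal traffic, and it is an illustration rather than the paper's implementation.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_adaboost_ids(X_train, y_train, n_rounds=100):
    """AdaBoost intrusion detector with decision stumps as weak classifiers."""
    stump = DecisionTreeClassifier(max_depth=1)     # a decision stump
    # Note: scikit-learn < 1.2 uses the keyword base_estimator instead of estimator.
    model = AdaBoostClassifier(estimator=stump, n_estimators=n_rounds)
    return model.fit(X_train, y_train)
```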

  13. Moving target detection algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Wang, Zhihua; Kai, Du; Zhang, Xiandong

    2013-07-01

    In a real-time video surveillance system, background noise and disturbances have a significant impact on the detection of moving objects. The traditional Gaussian mixture model (GMM) adapts well to a variety of complex backgrounds, but it converges slowly and is vulnerable to illumination changes. This paper proposes an improved moving target detection algorithm based on the Gaussian mixture model that increases the convergence rate of the foreground-to-background model transformation, introduces the concept of changing factors, and solves the problem of sudden illumination changes through a three-frame differencing method. The results show that this algorithm improves the accuracy of moving object detection and has good stability and real-time performance.
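
    One way to realize such a combination is to intersect a Gaussian-mixture foreground mask with a three-frame-difference mask, so that spurious foreground caused by sudden global illumination changes is suppressed. The sketch below uses OpenCV's MOG2 subtractor as the GMM component; the threshold value is illustrative, and this is an assumption-level illustration rather than the paper's exact algorithm.

```python
import cv2

def detect_moving_targets(video_path, diff_thresh=25):
    """Yield per-frame foreground masks combining a GMM background subtractor
    (MOG2) with three-frame differencing to suppress illumination artifacts."""
    cap = cv2.VideoCapture(video_path)
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg_gmm = mog.apply(gray)
        frames.append(gray)
        if len(frames) >= 3:
            d1 = cv2.absdiff(frames[-1], frames[-2])
            d2 = cv2.absdiff(frames[-2], frames[-3])
            _, m1 = cv2.threshold(d1, diff_thresh, 255, cv2.THRESH_BINARY)
            _, m2 = cv2.threshold(d2, diff_thresh, 255, cv2.THRESH_BINARY)
            fg_diff = cv2.bitwise_and(m1, m2)
            yield cv2.bitwise_and(fg_gmm, fg_diff)   # consensus foreground mask
            frames.pop(0)
    cap.release()
```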

  14. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    PubMed

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion, combined with a learning factor, is used to control the dynamic adaptive parameter-adjusting process. The crossover operation of the genetic algorithm is utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
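
    Rechenberg's 1/5 success rule referenced above has a compact generic form; the adjustment factor and the idea of applying it once per window of trial moves are illustrative, and the learning-factor coupling used in DACS-CO is omitted.

```python
def one_fifth_rule(step, success_rate, factor=0.85):
    """Rechenberg's 1/5 success rule: enlarge the step/scale parameter when
    more than one fifth of recent trial moves improved the objective,
    shrink it otherwise."""
    if success_rate > 0.2:
        return step / factor   # many successes: search more widely
    if success_rate < 0.2:
        return step * factor   # few successes: search more locally
    return step
```

    In practice, `success_rate` would be the fraction of improving moves over a window of recent iterations, and the rule would be applied once per window.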

  16. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criteria combined with learning factor were used to control the parameter dynamic adaptive adjusting process. The crossover operation of genetic algorithm was utilized to guarantee the population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, the simulation data and real radar clutter data are both implemented. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  17. Adaptive conventional power system stabilizer based on artificial neural network

    SciTech Connect

    Kothari, M.L.; Segal, R.; Ghodki, B.K.

    1995-12-31

    This paper deals with an artificial neural network (ANN) based adaptive conventional power system stabilizer (PSS). The ANN comprises an input layer, a hidden layer and an output layer. The input vector to the ANN comprises real power (P) and reactive power (Q), while the output vector comprises optimum PSS parameters. A systematic approach for generating a training set covering a wide range of operating conditions is presented. The ANN has been trained using the back-propagation training algorithm. Investigations reveal that the dynamic performance of the ANN based adaptive conventional PSS is quite insensitive to wide variations in loading conditions.

  18. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM. (R827028)

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable o...

  19. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...

  20. SIMULATION OF A REACTING POLLUTANT PUFF USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic solution adaptive grid algorithm DSAGA-PPM, has been developed for use in air quality modeling. In this paper, this algorithm is described and evaluated with a test problem. Cone-shaped distributions of various chemical species undergoing chemical reactions are rota...

  1. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
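
    A compact sketch of the list-based cooling idea is given below (not the authors' code): the maximum of a temperature list drives the Metropolis test, and that maximum is replaced by the mean of the uphill moves accepted during each sweep, so cooling adapts to the problem. A 2-opt segment reversal is used as the neighbourhood move; the distance matrix is a toy input.

        import math, random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def lbsa_tsp(dist, list_len=100, outer_iters=1000, seed=0):
            random.seed(seed)
            n = len(dist)
            tour = list(range(n))
            random.shuffle(tour)
            cost = tour_length(tour, dist)
            # Initial temperature list from random 2-opt perturbations.
            temps = []
            for _ in range(list_len):
                i, j = sorted(random.sample(range(n), 2))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                temps.append(abs(tour_length(cand, dist) - cost) + 1e-9)
            for _ in range(outer_iters):
                t_max = max(temps)
                accepted_uphill = []
                for _ in range(n):
                    i, j = sorted(random.sample(range(n), 2))
                    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    c = tour_length(cand, dist)
                    delta = c - cost
                    if delta <= 0 or random.random() < math.exp(-delta / t_max):
                        if delta > 0:
                            accepted_uphill.append(delta)   # feeds the next temperature
                        tour, cost = cand, c
                if accepted_uphill:
                    # Replace the maximum temperature with a value derived from the
                    # accepted uphill moves, so the list cools adaptively.
                    temps.remove(t_max)
                    temps.append(sum(accepted_uphill) / len(accepted_uphill))
            return tour, cost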

  2. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650

  3. Optimization algorithm of digital watermarking anti-coalition attacks in DWT-domain based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Que, Dashun; Li, Gang; Yue, Peng

    2007-12-01

    An adaptive optimization watermarking algorithm based on the genetic algorithm (GA) and the discrete wavelet transform (DWT) is proposed in this paper. The core of the algorithm is a GA-based fitness function optimization model for digital watermarking. The embedding intensity can be modified adaptively, so the algorithm effectively ensures the imperceptibility of the watermark while preserving its robustness. The optimization model may also provide a new approach to resisting coalition attacks on digital watermarking algorithms. The paper reports many experiments, including watermark embedding and extraction, the influence of the weighting factor, embedding the same watermark into different cover images, embedding different watermarks into the same cover image, and a comparative analysis between this optimization algorithm and a human visual system (HVS) based algorithm. The simulation results and further analysis show the effectiveness and advantages of the new algorithm, which is also versatile and extensible and has better resistance to coalition attacks. Moreover, the robustness and security of the watermarking algorithm are improved by applying a scrambling transformation and chaotic encryption when preprocessing the watermark.

  4. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  5. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
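
    The sketch below illustrates the flavour of the approach rather than the paper's algorithm: HMCR and PAR are drawn from small candidate sets whose selection probabilities are reinforced whenever the chosen pair produces an improving harmony, a learning-automata-style update. All constants and candidate values are placeholders.

        import numpy as np

        def lahs(objective, bounds, hm_size=20, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
            dim = lo.size
            hm = rng.uniform(lo, hi, (hm_size, dim))            # harmony memory
            fit = np.array([objective(x) for x in hm])
            hmcr_set, par_set = np.array([0.85, 0.9, 0.95]), np.array([0.1, 0.3, 0.5])
            p_hmcr = np.full(3, 1 / 3)                          # selection probabilities
            p_par = np.full(3, 1 / 3)
            bw = 0.05 * (hi - lo)
            for _ in range(iters):
                ih = rng.choice(3, p=p_hmcr)
                ip = rng.choice(3, p=p_par)
                hmcr, par = hmcr_set[ih], par_set[ip]
                new = np.empty(dim)
                for d in range(dim):
                    if rng.random() < hmcr:                     # memory consideration
                        new[d] = hm[rng.integers(hm_size), d]
                        if rng.random() < par:                  # pitch adjustment
                            new[d] += rng.uniform(-1, 1) * bw[d]
                    else:                                       # random selection
                        new[d] = rng.uniform(lo[d], hi[d])
                new = np.clip(new, lo, hi)
                f = objective(new)
                worst = np.argmax(fit)
                if f < fit[worst]:
                    hm[worst], fit[worst] = new, f
                    reward = 0.1                                # reinforce the chosen parameters
                    p_hmcr = (1 - reward) * p_hmcr
                    p_hmcr[ih] += reward
                    p_par = (1 - reward) * p_par
                    p_par[ip] += reward
            k = np.argmin(fit)
            return hm[k], fit[k]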

  6. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
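
    For readers new to the idea, a toy genetic algorithm with tournament selection, one-point crossover, bit-flip mutation and elitism is sketched below; it is generic Python and unrelated to the software tool described in the record.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                              cx_rate=0.8, mut_rate=0.02, seed=0):
            random.seed(seed)
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=fitness, reverse=True)
                new_pop = scored[:2]                       # elitism: keep the two best
                while len(new_pop) < pop_size:
                    # Tournament selection of two parents.
                    p1 = max(random.sample(pop, 3), key=fitness)
                    p2 = max(random.sample(pop, 3), key=fitness)
                    if random.random() < cx_rate:          # one-point crossover
                        cut = random.randint(1, n_bits - 1)
                        child = p1[:cut] + p2[cut:]
                    else:
                        child = p1[:]
                    child = [b ^ 1 if random.random() < mut_rate else b for b in child]
                    new_pop.append(child)
                pop = new_pop
            return max(pop, key=fitness)

        # Example: maximise the number of ones in the bit string.
        best = genetic_algorithm(fitness=sum)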

  7. Teaching AI Search Algorithms in a Web-Based Educational System

    ERIC Educational Resources Information Center

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…

  8. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  9. Optimization and vibration suppression of adaptive composite panels using genetic algorithm and disturbance observer technique

    NASA Astrophysics Data System (ADS)

    Yan, Su; Ghasemi-Nejhad, Mehrdad N.

    2003-07-01

    In this paper, a model of adaptive composite panel surfaces with piezoelectric patches is built using the Rayleigh-Ritz method based on laminate theory. The inertia and stiffness of the actuators are considered in the developed model. Optimal actuator placement is desirable because piezoelectric actuators are often limited in the power they can deliver. Due to its effectiveness in searching for optimal design parameters and obtaining globally optimal solutions, the genetic algorithm has been applied to find optimal locations of piezoelectric actuators for the vibration control of a smart composite beam. In addition, the effects of population size, crossover probability, and mutation probability on the convergence of the genetic algorithm are investigated. Meanwhile, a linear quadratic regulator (LQR) and a disturbance observer (DOB) are employed for the vibration suppression of the optimized adaptive composite beam (ACB). The experimental results show the robustness of the DOB, which successfully suppresses the vibrations of the cantilevered ACB, built according to the optimization results, in an uncertain system.

  10. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of the BP neural network and improves its generalization ability, but also accelerates convergence, avoids trapping in local minima, and enhances adaptation and prediction ability. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods in Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, which shows its feasibility and effectiveness.

  11. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the use of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (EEG/ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers based on traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented for comparison. ERP signals such as a simulated visual evoked potential, a real visual evoked potential, and a real sensorimotor evoked potential are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are 8.21E-01 sec and 1.73E-01, respectively. Although the traditional algorithms require negligible computation time, they are unable to preserve the ERP shape well, with an average computational time and shape measure of 1.41E-02 sec and 2.60E+00, respectively.
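
    For reference, the baseline adaptive noise canceller that the swarm-based designs are compared against can be as simple as the LMS filter sketched below; the controlled-search-space variants themselves are not reproduced here, and the step size and tap count are illustrative.

        import numpy as np

        def lms_anc(primary, reference, n_taps=16, mu=0.01):
            # primary: signal + correlated noise; reference: noise reference input.
            primary = np.asarray(primary, float)
            reference = np.asarray(reference, float)
            w = np.zeros(n_taps)
            out = np.zeros(len(primary))
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]       # most recent samples first
                noise_est = w @ x
                e = primary[n] - noise_est              # error = cleaned signal estimate
                w += 2 * mu * e * x                     # LMS weight update
                out[n] = e
            return out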

  12. Adaptive polarimetric sensing for optimum radar signature classification using a genetic search algorithm.

    PubMed

    Sadjadi, Firooz A

    2006-08-01

    An automated technique for adaptive radar polarimetric pattern classification is described. The approach is based on a genetic algorithm that uses a probabilistic pattern separation distance function and searches for the transmit and receive polarization sensing angles that optimize this function. Seven pattern separation distance functions (the Rayleigh quotient; the Bhattacharyya, divergence, Kolmogorov, Matusita, and Kullback-Leibler distances; and the Bayesian probability of error) are used on real, fully polarimetric synthetic aperture radar target signatures. Each of these signatures is represented as a function of the transmit and receive polarization ellipticity angles and the angle of the polarization ellipse. The results indicate that, for the majority of the distance functions used, there is a unique set of polarization angles whose use leads to improved classification performance.

  13. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we present a state-predictor based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. The efficiency of the design is demonstrated using the short-period dynamics of an aircraft. A formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  14. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  15. MARGA: multispectral adaptive region growing algorithm for brain extraction on axial MRI.

    PubMed

    Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Vilanova, Joan C; Rovira, Alex; Ramió-Torrentà, Lluís; Lladó, Xavier

    2014-02-01

    Brain extraction, also known as skull stripping, is one of the most important preprocessing steps in many automatic brain image analyses. In this paper we present a new approach called the Multispectral Adaptive Region Growing Algorithm (MARGA) to perform the skull stripping process. MARGA is based on a region growing (RG) algorithm which uses the complementary information provided by conventional magnetic resonance images (MRI), such as T1-weighted and T2-weighted images, to perform the brain segmentation. MARGA can be seen as an extension of the skull stripping method proposed by Park and Lee (2009) [1], enabling its use on both axial views and low quality images. Following the same idea, we first obtain seed regions that are then spread using a 2D RG algorithm which behaves differently in specific zones of the brain. This adaptation allows the algorithm to deal with the fact that middle MRI slices have better image contrast between the brain and non-brain regions than superior and inferior brain slices, where the contrast is smaller. MARGA is validated using three different databases: 10 simulated brains from the BrainWeb database; 2 data sets from the National Alliance for Medical Image Computing (NAMIC) database, the first consisting of 10 normal brains and 10 brains of schizophrenic patients acquired with a 3T GE scanner, and the second consisting of 5 brains from lupus patients acquired with a 3T Siemens scanner; and 10 brains of multiple sclerosis patients acquired with a 1.5T scanner. We have qualitatively and quantitatively compared MARGA with the well-known Brain Extraction Tool (BET), Brain Surface Extractor (BSE) and Statistical Parametric Mapping (SPM) approaches. The obtained results demonstrate the validity of MARGA, which outperforms these standard techniques. PMID:24380649

  16. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the most accurate human sensation possible compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned by the worst-case scenario method, which is based on trial and error, depends on the experience of drivers and programmers, and is the most significant obstacle to full motion platform utilisation. This leads to an inflexible structure, produces false cues and makes the resulting simulator unsuited to all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of the different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and the correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  17. Using adaptive genetic algorithms in the design of morphological filters in textural image processing

    NASA Astrophysics Data System (ADS)

    Li, Wei; Haese-Coat, Veronique; Ronsin, Joseph

    1996-03-01

    An adaptive GA scheme is adopted for the optimal morphological filter design problem. Adaptive crossover and mutation rates, which help the GA avoid premature convergence while still assuring convergence of the program, are successfully used in the optimal morphological filter design procedure. In the string coding step, each string (chromosome) is composed of a structuring element coding chain concatenated with a filter sequence coding chain. In the decoding step, each string is divided into three chains, which are then decoded respectively into one structuring element no larger than 5 by 5 and two concatenated morphological filter operators. The fitness function of the GA is based on the mean-square-error (MSE) criterion. In the string selection step, a stochastic tournament procedure is used in place of simple roulette wheel selection in order to accelerate convergence. The final convergence of the algorithm is reached by a two-step converging strategy. In the presented applications of noise removal from texture images, it is found that with the optimized morphological filter sequences, the obtained MSE values are smaller than those of the corresponding non-adaptive morphological filters, and the optimized shapes and orientations of the structuring elements approximately match those of the image textons.

  18. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Santillo, Mario A. (Inventor); Bernstein, Dennis S. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.

  19. CCD Base Line Subtraction Algorithms

    SciTech Connect

    Kotov, I.V.; OConnor, P.; Kotov, A.; Frank, J.; Perevoztchikov, V.; Takacs, P.

    2010-06-28

    High statistics astronomical surveys require photometric accuracy on a few percent level. The accuracy of sensor calibration procedures should match this goal. The first step in calibration procedures is the base line subtraction. The accuracy and robustness of different base line subtraction techniques used for Charge Coupled Device (CCD) sensors are discussed.

  20. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
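
    A toy sketch of the fusion step is given below: each algorithm's F0 estimate is treated as a measurement whose noise variance is set from a per-algorithm trust weight, and a scalar Kalman filter fuses them frame by frame. The paper's quality and performance measures and its weighting scheme are richer; everything here, including the constants, is illustrative.

        import numpy as np

        def fuse_f0(estimates, weights, q=1.0, r0=25.0):
            # estimates: (n_frames, n_algorithms) F0 tracks in Hz; weights: per-algorithm trust.
            estimates = np.asarray(estimates, float)
            weights = np.asarray(weights, float)
            x, p = estimates[0].mean(), 100.0           # initial state and variance
            fused = np.empty(len(estimates))
            for t, frame in enumerate(estimates):
                p += q                                  # random-walk prediction
                for z, wgt in zip(frame, weights):
                    r = r0 / max(wgt, 1e-6)             # trusted estimators get small R
                    k = p / (p + r)                     # Kalman gain
                    x += k * (z - x)
                    p *= (1 - k)
                fused[t] = x
            return fused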

  1. Robust fundamental frequency estimation in sustained vowels: detailed algorithmic comparisons and information fusion with adaptive Kalman filtering.

    PubMed

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A; Fox, Cynthia; Ramig, Lorraine O; Clifford, Gari D

    2014-05-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F(0)) of speech signals. This study examines ten F(0) estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F(0) in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F(0) estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F(0) estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F(0) estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F(0) estimation is required. PMID:24815269

  2. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  3. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect to all biological problems, including environmental microbiology, evolution of infectious diseases, and the adaptation of cancer cells is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation and climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  4. On solving the 3-D phase field equations by employing a parallel-adaptive mesh refinement (Para-AMR) algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Xiong, S. M.

    2015-05-01

    An algorithm comprising adaptive mesh refinement (AMR) and parallel (Para-) computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase advancing velocity. Local data at each computing process were then constructed, and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations of single and multi-dendrite growth were performed, and the results show that the proposed algorithm shortens the computing time of 3-D phase field simulations by about two orders of magnitude and enables one to gain much more insight into the underlying physics of dendrite growth during solidification.

  5. Adaptive merit function in SPGD algorithm for beam combining

    NASA Astrophysics Data System (ADS)

    Yang, Guo-qing; Liu, Li-sheng; Jiang, Zhen-hua; Wang, Ting-feng; Guo, Jin

    2016-09-01

    Beam pointing is the most crucial issue in beam combining for achieving high-energy laser output. To cope with turbulence, a beam pointing method that cooperates with the stochastic parallel gradient descent (SPGD) algorithm is proposed. The power-in-the-bucket (PIB) is chosen as the merit function, and its radius changes gradually during the correction process. Linear and exponential radius schedules are simulated. The results show that the exponential radius schedule holds great promise for beam pointing.
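
    A minimal sketch of the adaptive-merit-function idea follows: a standard two-sided SPGD update in which the power-in-the-bucket radius shrinks exponentially over the iterations (one of the schedules compared above). The measurement function is user-supplied; gains, dither and radii are placeholders.

        import numpy as np

        def spgd_pointing(measure_pib, n_params, iters=300, gain=0.5, dither=0.05,
                          r_start=8.0, r_end=1.0, seed=0):
            # measure_pib(u, radius) -> PIB value for control vector u (user-supplied).
            rng = np.random.default_rng(seed)
            u = np.zeros(n_params)
            for k in range(iters):
                radius = r_start * (r_end / r_start) ** (k / (iters - 1))   # exponential shrink
                delta = dither * rng.choice([-1.0, 1.0], n_params)          # random perturbation
                j_plus = measure_pib(u + delta, radius)
                j_minus = measure_pib(u - delta, radius)
                u += gain * (j_plus - j_minus) * delta                      # SPGD update
            return u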

  6. Chromatic adaptation-based tone reproduction for high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Lee, Joohyun; Jeon, Gwanggil; Jeong, Jechang

    2009-10-01

    We present an adaptive tone reproduction algorithm for displaying high-dynamic-range (HDR) images on conventional low-dynamic-range (LDR) display devices. The proposed algorithm consists of an adaptive tone reproduction operator and chromatic adaptation. The algorithm for dynamic range reduction relies on suitable tone reproduction functions that depend on histogram-based parameter estimation to adjust luminance according to global and local features. Instead of relying only on reduction of dynamic range, the chromatic adaptation technique also preserves chromatic appearance and color consistency across scene and display environments. Our experimental results demonstrate that the proposed algorithm achieves good subjective quality while preserving image details. Furthermore, the proposed algorithm is simple and practical to implement.
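
    As a rough illustration of histogram-driven tone reproduction (not the paper's operator, which also includes local adjustment and chromatic adaptation), the sketch below maps log-luminance through its cumulative histogram.

        import numpy as np

        def tone_map_luminance(lum, bins=256):
            # lum: HDR luminance array (positive floats); returns values in [0, 1].
            log_l = np.log1p(lum)
            hist, edges = np.histogram(log_l, bins=bins)
            cdf = np.cumsum(hist).astype(float)
            cdf /= cdf[-1]                              # normalised cumulative histogram
            # Map each pixel's log-luminance through the CDF (histogram equalisation).
            idx = np.clip(np.digitize(log_l, edges[1:-1]), 0, bins - 1)
            return cdf[idx]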

  7. Adaptive RSOV filter using the FELMS algorithm for nonlinear active noise control systems

    NASA Astrophysics Data System (ADS)

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou; Li, Tianrui

    2013-01-01

    This paper presents a recursive second-order Volterra (RSOV) filter to solve the problems of signal saturation and other nonlinear distortions that occur in nonlinear active noise control systems (NANC) used for actual applications. Since this nonlinear filter based on an infinite impulse response (IIR) filter structure can model higher than second-order and third-order nonlinearities for systems where the nonlinearities are harmonically related, the RSOV filter is more effective in NANC systems with either a linear secondary path (LSP) or a nonlinear secondary path (NSP). Simulation results clearly show that the RSOV adaptive filter using the multichannel structure filtered-error least mean square (FELMS) algorithm can further greatly reduce the computational burdens and is more suitable to eliminate nonlinear distortions in NANC systems than a SOV filter, a bilinear filter and a third-order Volterra (TOV) filter.

  8. Application of an adaptive blade control algorithm to a gust alleviation system

    NASA Technical Reports Server (NTRS)

    Saito, S.

    1984-01-01

    The feasibility of an adaptive control system designed to alleviate helicopter gust induced vibration was analytically investigated for an articulated rotor system. This control system is based on discrete optimal control theory, and is composed of a set of measurements (oscillatory hub forces and moments), an identification system using a Kalman filter, a control system based on the minimization of the quadratic performance function, and a simulation system of the helicopter rotor. The gust models are step and sinusoidal vertical gusts. Control inputs are selected at the gust frequency, subharmonic frequency, and superharmonic frequency, and are superimposed on the basic collective and cyclic control inputs. The response to be reduced is selected to be that at the gust frequency because this is the dominant response compared with sub- and superharmonics. Numerical calculations show that the adaptive blade pitch control algorithm satisfactorily alleviates the hub gust response. Almost 100 percent reduction of the perturbation thrust response to a step gust and more than 50 percent reduction to a sinusoidal gust are achieved in the numerical simulations.

  9. Application of an adaptive blade control algorithm to a gust alleviation system

    NASA Technical Reports Server (NTRS)

    Saito, S.

    1983-01-01

    The feasibility of an adaptive control system designed to alleviate helicopter gust induced vibration was analytically investigated for an articulated rotor system. This control system is based on discrete optimal control theory, and is composed of a set of measurements (oscillatory hub forces and moments), an identification system using a Kalman filter, a control system based on the minimization of the quadratic performance function, and a simulation system of the helicopter rotor. The gust models are step and sinusoidal vertical gusts. Control inputs are selected at the gust frequency, subharmonic frequency, and superharmonic frequency, and are superimposed on the basic collective and cyclic control inputs. The response to be reduced is selected to be that at the gust frequency because this is the dominant response compared with sub- and superharmonics. Numerical calculations show that the adaptive blade pitch control algorithm satisfactorily alleviates the hub gust response. Almost 100% reduction of the perturbation thrust response to a step gust and more than 50% reduction to a sinusoidal gust are achieved in the numerical simulations.

  10. Secure OFDM communications based on hashing algorithms

    NASA Astrophysics Data System (ADS)

    Neri, Alessandro; Campisi, Patrizio; Blasi, Daniele

    2007-10-01

    In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is performed through variable-rate Turbo Codes. To avoid false rejections, i.e., rejections of enrolled (authorized) users, we designed and tested a robust hash algorithm. This robustness is obtained both by segmentation of the hash domain (based on BCH codes) and by the FEC capabilities of the Turbo Codes.

  11. An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles

    NASA Astrophysics Data System (ADS)

    Spratlin, Kenneth Milton

    1987-05-01

    An adaptive numeric predictor-corrector guidance is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  12. Adaptation algorithms for satellite communication systems equipped with hybrid reflector antennas

    NASA Astrophysics Data System (ADS)

    Kartsan, I. N.; Zelenkov, P. V.; Tyapkin, V. N.; Dmitriev, D. D.; Goncharov, A. E.

    2015-10-01

    This paper reviews adaptation algorithms for satellite communication systems subject to active interference. A multi-beam antenna is suggested as the adaptive system; it is built on the basis of a hybrid reflector antenna with a 19-element array feed and incorporates a modified radiation pattern synthesis algorithm for suppressing targeted interference. The antenna gains at fixed points are used as the criterion for this synthesis. As a result, the size of the objective function and the time required for the synthesis can be significantly reduced.

  13. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near to the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid blows up if the allowable time step is set to be greater than 0.1ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and it needs further investigation to determine whether the method is helpful during propagation of the action potential. PMID:27639239
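
    The sketch below illustrates the time-step selection idea on a toy FitzHugh-Nagumo model rather than the LR1, LR2 or ORd models: the next step is the positive root of a quadratic in the first and second derivatives of the potential, and its growth is capped in the spirit of the tsr device. All constants are illustrative, and the el extremum locator is omitted.

        import numpy as np

        def fitzhugh_nagumo(v, w, I=0.5, a=0.7, b=0.8, tau=12.5):
            dv = v - v**3 / 3 - w + I
            dw = (v + a - b * w) / tau
            return dv, dw

        def adaptive_integrate(t_end=200.0, dv_max=0.02, dt_min=1e-3, dt_max=0.5, grow=1.2):
            t, v, w, dt = 0.0, -1.0, 1.0, dt_min
            ts, vs = [t], [v]
            while t < t_end:
                dv, dw = fitzhugh_nagumo(v, w)
                # Second derivative of v by a finite difference of the vector field.
                dv2, _ = fitzhugh_nagumo(v + dt_min * dv, w + dt_min * dw)
                d2v = (dv2 - dv) / dt_min
                # Choose dt so that |dv*dt + 0.5*d2v*dt^2| <= dv_max (quadratic formula).
                a2, a1, a0 = 0.5 * abs(d2v), abs(dv), -dv_max
                if a2 > 1e-12:
                    dt_new = (-a1 + np.sqrt(a1**2 - 4 * a2 * a0)) / (2 * a2)
                else:
                    dt_new = dv_max / max(a1, 1e-12)
                dt = float(np.clip(min(dt_new, grow * dt), dt_min, dt_max))   # limit step growth
                v += dt * dv
                w += dt * dw
                t += dt
                ts.append(t)
                vs.append(v)
            return np.array(ts), np.array(vs)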

  14. Comparison of adaptive algorithms for the control of tonal disturbances in mechanical systems

    NASA Astrophysics Data System (ADS)

    Zilletti, M.; Elliott, S. J.; Cheer, J.

    2016-09-01

    This paper presents a study on the performance of adaptive control algorithms designed to reduce the vibration of mechanical systems excited by a harmonic disturbance. The mechanical system consists of a mass suspended on a spring and a damper. The system is equipped with a force actuator in parallel with the suspension. The control signal driving the actuator is generated by adjusting the amplitude and phase of a sinusoidal reference signal at the same frequency as the excitation. An adaptive feedforward control algorithm is used to adapt the amplitude and phase of the control signal, to minimise the mean square velocity of the mass. Two adaptation strategies are considered in which the control signal is either updated after each period of the oscillation or at every time sample. The first strategy is traditionally used in vibration control in helicopters for example; the second strategy is normally referred to as the filtered-x least mean square algorithm and is often used to control engine noise in cars. The two adaptation strategies are compared through a parametric study, which investigates the influence of the properties of both the mechanical system and the control system on the convergence speed of the two algorithms.
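
    The per-sample strategy can be sketched as follows: the in-phase and quadrature weights of a sinusoidal control signal are adapted every sample from the residual, filtered-x LMS style. The secondary path is taken as the identity here (so the filtered reference equals the raw reference), which is a simplification of the mass-spring-damper system studied in the paper; the frequencies and step size are illustrative.

        import numpy as np

        def tonal_fxlms(disturbance, freq, fs, mu=0.01):
            n = len(disturbance)
            t = np.arange(n) / fs
            ref_c, ref_s = np.cos(2 * np.pi * freq * t), np.sin(2 * np.pi * freq * t)
            a = b = 0.0                                  # in-phase / quadrature weights
            error = np.empty(n)
            for k in range(n):
                u = a * ref_c[k] + b * ref_s[k]          # control signal
                e = disturbance[k] + u                   # residual measured at the mass
                a -= mu * e * ref_c[k]                   # LMS updates on the filtered
                b -= mu * e * ref_s[k]                   # (here: raw) reference signals
                error[k] = e
            return error

        # Example: a 50 Hz tonal disturbance sampled at 2 kHz is largely cancelled.
        fs = 2000
        t = np.arange(4 * fs) / fs
        residual = tonal_fxlms(np.sin(2 * np.pi * 50 * t + 0.3), freq=50, fs=fs)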

  15. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, comprised of a state knowledge manager, a control executor, hardware proxies and a statistical estimator collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  16. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  17. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

    In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum, like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way that minimizes the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides having the advantage of being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average MSE, which is not the case for the optimal single-stream UEP system. Although the average PSNR of our method and that of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), which is a well-known and increasingly popular modulation scheme to combat ISI (Inter Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a

  18. Voronoi-based localisation algorithm for mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Guan, Zixiao; Zhang, Yongtao; Zhang, Baihai; Dong, Lijing

    2016-11-01

    Localisation is an essential and important part of wireless sensor networks (WSNs). Many applications require location information. So far, fewer researchers have studied mobile sensor networks (MSNs) than static sensor networks (SSNs). However, MSNs are required in more and more areas, since they allow the number of anchor nodes to be reduced and the location accuracy to be improved. In this paper, we first propose a range-free Voronoi-based Monte Carlo localisation algorithm (VMCL) for MSNs. We improve the localisation accuracy by making better use of the information that a sensor node gathers. Then, we propose an optimal region selection strategy for the Voronoi diagram based on VMCL, called ORSS-VMCL, to increase the efficiency and accuracy of VMCL by adapting the size of the Voronoi area during the filtering process. Simulation results show that the accuracy of these two algorithms, especially ORSS-VMCL, outperforms traditional MCL.

  19. Evaluation of an adaptive filtering algorithm for CT cardiac imaging with EKG modulated tube current

    NASA Astrophysics Data System (ADS)

    Li, Jianying; Hsieh, Jiang; Mohr, Kelly; Okerlund, Darin

    2005-04-01

    We have developed an adaptive filtering algorithm for cardiac CT scans with EKG-modulated tube current to optimize resolution and noise for different cardiac phases and to provide a safety net for cases where the end-systole phase is used for coronary imaging. This algorithm has been evaluated using patient cardiac CT scans in which lower tube currents are used for the systolic phases. In this paper, we present the evaluation results. The results demonstrate that with the proposed algorithm, we can improve image quality for all cardiac phases, while providing greater noise and streak artifact reduction for systolic phases where lower CT doses were used.

  20. Adaptive randomized algorithms for analysis and design of control systems under uncertain environments

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia

    2015-05-01

    We consider the general problem of analysis and design of control systems in the presence of uncertainties. We treat uncertainties that affect a control system as random variables. The performance of the system is measured by the expectation of some derived random variables, which are typically bounded. We develop adaptive sequential randomized algorithms for estimating and optimizing the expectation of such bounded random variables with guaranteed accuracy and confidence level. These algorithms can be applied to overcome the conservatism and computational complexity in the analysis and design of controllers to be used in uncertain environments. We develop methods for investigating the optimality and computational complexity of such algorithms.
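
    In the same spirit (though not the authors' algorithms), the sketch below estimates the expectation of a bounded performance measure by drawing Monte Carlo samples until a Hoeffding bound guarantees the requested accuracy and confidence; the sampling function and stopping constants are placeholders.

        import math, random

        def estimate_expectation(sample_performance, eps=0.01, delta=0.05, max_n=10**7):
            # sample_performance() must return values in [0, 1].
            total, n = 0.0, 0
            while n < max_n:
                total += sample_performance()            # one Monte Carlo experiment
                n += 1
                # Hoeffding: P(|mean - E| > eps) <= 2 exp(-2 n eps^2) <= delta
                if n >= math.log(2 / delta) / (2 * eps**2):
                    break
            return total / n

        # Example: probability that a random gain keeps a toy closed loop "stable".
        est = estimate_expectation(lambda: 1.0 if random.uniform(0, 2) < 1.3 else 0.0)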

  1. Adaptive algorithm for active control of high-amplitude acoustic field in resonator

    NASA Astrophysics Data System (ADS)

    Červenka, M.; Bednařík, M.; Koníček, P.

    2008-06-01

    This work is concerned with the suppression of nonlinear effects in piston-driven acoustic resonators by means of a two-frequency driving technique. An iterative adaptive algorithm is proposed to calculate the parameters of the driving signal so that the amplitude of the second harmonic of the acoustic pressure is minimized. The functionality of the algorithm is verified first by means of a numerical model and then in a real computer-controlled experiment. The numerical and experimental results show that the proposed algorithm can be successfully used to generate a high-amplitude, shock-free acoustic field in resonators.
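
    A toy version of the iterative adaptation is sketched below: the amplitude and phase of the second driving component are nudged by a greedy coordinate search until the measured second-harmonic content is minimised. The resonator model is a user-supplied stand-in, and the single-bin DFT measurement assumes a record spanning an integer number of periods; none of this reproduces the authors' algorithm.

        import numpy as np

        def second_harmonic_amplitude(p, f0, fs):
            t = np.arange(len(p)) / fs
            c = np.mean(p * np.exp(-2j * np.pi * 2 * f0 * t))   # single-bin DFT at 2*f0
            return 2 * np.abs(c)

        def adapt_two_freq_drive(simulate_pressure, f0, fs, iters=100, step=0.05):
            # simulate_pressure(a2, phi2) -> pressure record for the 2nd drive component.
            a2, phi2 = 0.0, 0.0
            best = second_harmonic_amplitude(simulate_pressure(a2, phi2), f0, fs)
            for _ in range(iters):
                for da, dphi in ((step, 0), (-step, 0), (0, step), (0, -step)):
                    cand = second_harmonic_amplitude(simulate_pressure(a2 + da, phi2 + dphi), f0, fs)
                    if cand < best:                     # greedy coordinate search
                        a2, phi2, best = a2 + da, phi2 + dphi, cand
            return a2, phi2, best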

  2. A high precision phase reconstruction algorithm for multi-laser guide stars adaptive optics

    NASA Astrophysics Data System (ADS)

    He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai; Xuan, Li

    2016-09-01

    Adaptive optics (AO) systems are widespread and at present considered an essential part of any large-aperture telescope for obtaining high-resolution imaging. To enlarge the imaging field of view (FOV), multiple laser guide stars (LGSs) are currently being investigated and used for large-aperture optical telescopes. LGS measurement is necessary and pivotal for obtaining the cumulative phase distortion along a target direction in the multi-LGS AO system. We propose a high-precision phase reconstruction algorithm, based on interpolation, to estimate the phase for a target with an uncertain turbulence profile. Compared with the conventional average method, the proposed method reduces the root-mean-square (RMS) error from 130 nm to 85 nm, a 30% reduction, for a narrow FOV. We confirm that such a phase reconstruction algorithm is valid for both narrow-field and wide-field AO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.

  3. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  4. A high precision phase reconstruction algorithm for multi-laser guide stars adaptive optics

    NASA Astrophysics Data System (ADS)

    He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai; Xuan, Li

    2016-09-01

    Adaptive optics (AO) systems are widespread and at present considered an essential part of any large-aperture telescope for obtaining high-resolution imaging. To enlarge the imaging field of view (FOV), multiple laser guide stars (LGSs) are currently being investigated and used for large-aperture optical telescopes. LGS measurement is necessary and pivotal for obtaining the cumulative phase distortion along a target direction in the multi-LGS AO system. We propose a high-precision phase reconstruction algorithm, based on interpolation, to estimate the phase for a target with an uncertain turbulence profile. Compared with the conventional average method, the proposed method reduces the root-mean-square (RMS) error from 130 nm to 85 nm, a 30% reduction, for a narrow FOV. We confirm that such a phase reconstruction algorithm is valid for both narrow-field and wide-field AO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.

  5. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming, dependent on the skill of the cytopathologist, and thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results show that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS produces good classification performance with 94.2% accuracy. PMID:24707316

  6. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    PubMed

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming, dependent on the skill of the cytopathologist, and thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results show that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS produces good classification performance with 94.2% accuracy. PMID:24707316

  7. An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content

    PubMed Central

    Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng

    2016-01-01

    The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in the Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy, even when there was a strong correlation between the soil properties and the environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root-mean-square errors. Furthermore, the AW-SP maps provided improved detail of soil potassium content and clearer boundaries in its spatial distribution. In conclusion, AW-SP not only reduces prediction errors but also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable. PMID:27051998

  8. An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng

    2016-04-01

    The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in the Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy, even when there was a strong correlation between the soil properties and the environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root-mean-square errors. Furthermore, the AW-SP maps provided improved detail of soil potassium content and clearer boundaries in its spatial distribution. In conclusion, AW-SP not only reduces prediction errors but also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable.
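
    AW-SP itself is not specified in the abstract. The sketch below only conveys the general idea of adaptively weighting an inverse-distance scheme with environmental covariates: each sampled point contributes according to both its spatial distance and the similarity of its (hypothetical) environmental variables to those at the prediction location.

    import numpy as np

    def adaptive_weighted_interp(xy, values, env, xy0, env0, p=2.0, h=1.0):
        """Predict a soil property at location xy0 from sampled points.

        xy     -- (n, 2) sample coordinates        values -- (n,) sampled property
        env    -- (n, k) environmental covariates  env0   -- (k,) covariates at xy0
        Spatial weight: inverse distance to the power p.
        Environmental weight: Gaussian kernel on covariate distance, bandwidth h.
        """
        xy, values, env = (np.asarray(a, float) for a in (xy, values, env))
        d = np.linalg.norm(xy - np.asarray(xy0, float), axis=1) + 1e-12
        w_spatial = 1.0 / d ** p
        w_env = np.exp(-np.linalg.norm(env - np.asarray(env0, float), axis=1) ** 2
                       / (2.0 * h ** 2))
        w = w_spatial * w_env
        return float(np.sum(w * values) / np.sum(w))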

  9. Adaptive motion artifact reducing algorithm for wrist photoplethysmography application

    NASA Astrophysics Data System (ADS)

    Zhao, Jingwei; Wang, Guijin; Shi, Chenbo

    2016-04-01

    Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It can reveal potential risks in heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact because of the thicker tissue and the smaller number of capillaries at the wrist. Therefore, motion artifact is the major factor that impedes heart rate measurement during high-intensity exercise. In this research, one accelerometer and three channels of light with different wavelengths are used to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from the motion artifact by exploiting their mixing ratios in different optical paths. The method has four major steps: preprocessing, motion artifact estimation, adaptive filtering, and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and each subject ran for 3-10 minutes while swinging the arms naturally. The final result was compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well in intense physical exercise and shows great robustness to individuals with different running styles and postures.
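
    The abstract lists the four processing steps without giving the filter. A common way to realise the adaptive-filtering step is normalised LMS noise cancellation with the accelerometer as the motion reference, sketched below under that assumption; the filter order and step size are illustrative.

    import numpy as np

    def nlms_cancel(ppg, accel, order=16, mu=0.5, eps=1e-6):
        """Remove motion artifact from a PPG channel using an accelerometer reference.

        ppg, accel -- 1-D numpy arrays sampled at the same rate
        The adaptive filter predicts the artifact from the accelerometer; the pulse
        component survives in the error signal e[n] = ppg[n] - w.x[n].
        """
        w = np.zeros(order)
        cleaned = np.zeros(len(ppg))
        for n in range(order, len(ppg)):
            x = accel[n - order:n][::-1]               # recent reference samples
            artifact = np.dot(w, x)                    # current artifact estimate
            e = ppg[n] - artifact                      # artifact-free estimate
            w += mu * e * x / (np.dot(x, x) + eps)     # normalised LMS update
            cleaned[n] = e
        return cleaned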

  10. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  11. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  12. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  13. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
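
    A textbook illustration of the forward mode of Algorithmic Differentiation, using dual numbers to propagate derivatives to working precision; this is background for the technique the talk introduces, not the tooling discussed in it.

    class Dual:
        """Dual number a + b*eps with eps**2 = 0; the b component carries the derivative."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__

    def derivative(f, x):
        """Derivative of f at x, exact to working precision, via forward-mode AD."""
        return f(Dual(x, 1.0)).dot

    # d/dx [x**2 * (3 + x)] at x = 2 equals 6*2 + 3*2**2 = 24
    print(derivative(lambda x: x * x * (3 + x), 2.0))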

  14. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  15. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it further estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not affected by frequency-domain and inverse transformations.

  16. STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations]

    NASA Technical Reports Server (NTRS)

    Shah, S. N.

    1981-01-01

    The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.

  17. A Reliability-Based Track Fusion Algorithm

    PubMed Central

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    The common track fusion algorithms in multi-sensor systems have some defects, such as serious imbalances between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve information quality, outliers in the local tracks are eliminated first. Then the reliability of the local tracks is calculated, and the local tracks with high reliability are chosen for state estimation fusion. In contrast to existing methods, TFR reduces the high fusion errors at the inflection points of system tracks and obtains high accuracy with less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments. PMID:25950174

  18. A reliability-based track fusion algorithm.

    PubMed

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    The common track fusion algorithms in multi-sensor systems have some defects, such as serious imbalances between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve information quality, outliers in the local tracks are eliminated first. Then the reliability of the local tracks is calculated, and the local tracks with high reliability are chosen for state estimation fusion. In contrast to existing methods, TFR reduces the high fusion errors at the inflection points of system tracks and obtains high accuracy with less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments.
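
    The reliability measure used by TFR is not given in the abstract. The sketch below shows only the generic final step such an algorithm would perform, assuming each local track supplies a state estimate and a scalar reliability score in [0, 1]; the outlier elimination and reliability calculation are omitted.

    import numpy as np

    def fuse_tracks(states, reliabilities, threshold=0.5):
        """Fuse local track state estimates, weighted by their reliability scores.

        states        -- (m, d) array, one state estimate per sensor track
        reliabilities -- (m,) scores in [0, 1]; tracks below threshold are discarded
        Assumes at least one track has a positive reliability score.
        """
        states = np.asarray(states, dtype=float)
        r = np.asarray(reliabilities, dtype=float)
        keep = r >= threshold
        if not np.any(keep):
            keep = r == r.max()            # fall back to the single best track
        w = r[keep] / r[keep].sum()
        return w @ states[keep]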

  19. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most of the genetic algorithms (GAs) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
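
    The geometrical construct described above translates almost directly into code. The sketch below generates one BCB-style child from two parents, with the weighting and the two normal-deviation parameters set to illustrative values.

    import numpy as np

    def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=None):
        """One Bell-Curve Based offspring (assumes distinct parents, dimension >= 2):
        a weighted point on the line joining the parents, perturbed by normal
        deviates parallel and orthogonal to that line."""
        rng = rng or np.random.default_rng()
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        d = p2 - p1
        length = np.linalg.norm(d)
        u = d / length                                  # unit vector along the line
        base = p1 + w * d                               # weighted point on the line
        v = rng.standard_normal(p1.size)                # random direction ...
        v -= np.dot(v, u) * u                           # ... made orthogonal to u
        v /= np.linalg.norm(v)
        return (base + rng.normal(0.0, sigma_par) * length * u
                     + rng.normal(0.0, sigma_orth) * length * v)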

  20. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can achieve better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  1. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for porn image classification. Most conventional methods are off-line training schemes: they usually use a fixed boundary to segment skin regions in images and are effective only under restricted conditions, e.g., good lighting and a single ethnicity. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is considered a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. Then an adaptive online training algorithm is used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than the conventional ones.

  2. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-01-01

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, and multi-peak optimization problem for which it is difficult to acquire satisfactory, especially high-quality, solutions. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population, and a particle crossover operation and an elitist preservation strategy are also used. The simulation results revealed that the proposed algorithm can identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance for designing GRNs with superior robust adaptation. PMID:27323043
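
    The best-neighbor selection, particle crossover, and elitist preservation of the proposed algorithm are not detailed in the abstract. The sketch below shows only the baseline it modifies: a plain global-best PSO whose initial population is drawn by Latin hypercube sampling.

    import numpy as np

    def latin_hypercube(n, d, rng):
        """n points in the unit hypercube, stratified along every dimension."""
        u = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (u + rng.random((n, d))) / n

    def pso_lhs(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Global-best PSO whose initial swarm is drawn by Latin hypercube sampling."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], float)
        hi = np.array([b[1] for b in bounds], float)
        x = lo + latin_hypercube(n, len(bounds), rng) * (hi - lo)
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, float(pbest_f.min())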

  3. Design of scheduling and rate-adaptation algorithms for adaptive HTTP streaming

    NASA Astrophysics Data System (ADS)

    Hesse, Stephan

    2013-09-01

    In the adaptive HTTP streaming model, the HTTP server stores multiple representations of the media content, encoded at different rates. It is the function of the streaming client to select and retrieve segments of the appropriate representations to enable continuous media playback under varying network conditions. In this paper we describe the design of a control mechanism enabling such selection and retrieval of media data during a streaming session. We also describe the architecture of a streaming client for adaptive HTTP streaming and provide simulation data illustrating the effectiveness of the proposed control mechanism for handling the bandwidth fluctuations typical of TCP traffic.
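
    The paper's control mechanism is not given in the abstract. The sketch below shows a simple buffer- and throughput-aware selection rule of the kind such a client could implement; the thresholds and safety margin are made-up values.

    def select_representation(bitrates_bps, est_throughput_bps, buffer_s,
                              low_buffer_s=5.0, high_buffer_s=20.0, margin=0.8):
        """Pick the bitrate of the next segment for an adaptive HTTP streaming client.

        bitrates_bps -- available representation bitrates, in ascending order
        Conservative rule: never exceed margin * estimated throughput; drop to the
        lowest representation when the playout buffer runs low, and only use the
        highest feasible rate when the buffer is comfortably full.
        """
        if buffer_s < low_buffer_s:
            return bitrates_bps[0]
        budget = margin * est_throughput_bps
        feasible = [b for b in bitrates_bps if b <= budget]
        if not feasible:
            return bitrates_bps[0]
        # With a large buffer we can afford the highest feasible rate; otherwise
        # stay one step below it to absorb throughput fluctuations.
        return feasible[-1] if buffer_s >= high_buffer_s else feasible[max(0, len(feasible) - 2)]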

  4. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft-input soft-output Multiple Input Multiple Output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system, since the receiver chooses whether to accept or to reject the transmission according to the success rate of the detection thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power-saving technique Dynamic Voltage and Frequency Scaling helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware co-design simulation.

  5. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  6. A high fuel consumption efficiency management scheme for PHEVs using an adaptive genetic algorithm.

    PubMed

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day.

  7. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.

  8. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.

  9. LAMBDAR: Lambda Adaptive Multi-Band Deblending Algorithm in R

    NASA Astrophysics Data System (ADS)

    Wright, Angus H.

    2016-04-01

    LAMBDAR measures galaxy fluxes from an arbitrary FITS image, covering an arbitrary photometric wave-band, when provided all parameters needed to construct galactic apertures at the required locations for multi-band matched aperture galactic photometry. Through sophisticated matched aperture photometry, the package develops robust Spectral Energy Distributions (SEDs) and accurately establishes the physical properties of galactic objects. LAMBDAR was based on a package detailed in Bourne et al. (2012) that determined galactic fluxes in low resolution Herschel images.

  10. Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm.

    PubMed

    Ffrench, P A; Zeidler, J H; Ku, W H

    1997-01-01

    Two-dimensional (2-D) adaptive filtering is a technique that can be applied to many image processing applications. This paper will focus on the development of an improved 2-D adaptive lattice algorithm (2-D AL) and its application to the removal of correlated clutter to enhance the detectability of small objects in images. The two improvements proposed here are increased flexibility in the calculation of the reflection coefficients and a 2-D method to update the correlations used in the 2-D AL algorithm. The 2-D AL algorithm is shown to predict correlated clutter in image data and the resulting filter is compared with an ideal Wiener-Hopf filter. The results of the clutter removal will be compared to previously published ones for a 2-D least mean square (LMS) algorithm. 2-D AL is better able to predict spatially varying clutter than the 2-D LMS algorithm, since it converges faster to new image properties. Examples of these improvements are shown for a spatially varying 2-D sinusoid in white noise and simulated clouds. The 2-D LMS and 2-D AL algorithms are also shown to enhance a mammogram image for the detection of small microcalcifications and stellate lesions.

  11. A correction algorithm to simultaneously control dual deformable mirrors in a woofer-tweeter adaptive optics system

    PubMed Central

    Li, Chaohong; Sredar, Nripun; Ivers, Kevin M.; Queener, Hope; Porter, Jason

    2010-01-01

    We present a direct slope-based correction algorithm to simultaneously control two deformable mirrors (DMs) in a woofer-tweeter adaptive optics system. A global response matrix was derived from the response matrices of each deformable mirror and the voltages for both deformable mirrors were calculated simultaneously. This control algorithm was tested and compared with a 2-step sequential control method in five normal human eyes using an adaptive optics scanning laser ophthalmoscope. The mean residual total root-mean-square (RMS) wavefront errors across subjects after adaptive optics (AO) correction were 0.128 ± 0.025 μm and 0.107 ± 0.033 μm for simultaneous and 2-step control, respectively (7.75-mm pupil). The mean intensity of reflectance images acquired after AO convergence was slightly higher for 2-step control. Radially-averaged power spectra calculated from registered reflectance images were nearly identical for all subjects using simultaneous or 2-step control. The correction performance of our new simultaneous dual DM control algorithm is comparable to 2-step control, but is more efficient. This method can be applied to any woofer-tweeter AO system. PMID:20721058
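
    The core of the simultaneous control described above is a single least-squares solve against a stacked response matrix. A minimal sketch of that step, assuming the two slope response matrices have been measured beforehand, is shown below.

    import numpy as np

    def dual_dm_voltages(slopes, resp_woofer, resp_tweeter):
        """Compute woofer and tweeter voltage updates from measured wavefront slopes.

        slopes       -- (m,) wavefront-sensor slope vector
        resp_woofer  -- (m, n_w) slope response matrix of the woofer DM
        resp_tweeter -- (m, n_t) slope response matrix of the tweeter DM
        The two response matrices are concatenated into one global matrix, and a
        single pseudo-inverse solve yields both voltage vectors at once.
        """
        global_resp = np.hstack([resp_woofer, resp_tweeter])
        v = np.linalg.pinv(global_resp) @ slopes
        n_w = resp_woofer.shape[1]
        return v[:n_w], v[n_w:]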

  12. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves the matching performance. To evaluate the performance of the proposed algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively, meaning that it is alternately repeated until the two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  13. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversal of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit-stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  14. LSB Based Quantum Image Steganography Algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) of quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' least significant bits directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can recover the secret message from the stego cover alone. Analysis and simulation-based experimental results demonstrate that the invisibility is good and that the balance between capacity and robustness can be adjusted according to the needs of applications.
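
    The paper's algorithms are quantum circuits over NEQR images. Purely as a classical analogue of the plain-LSB variant, the sketch below embeds and extracts a bit string in the least significant bits of grayscale pixel values.

    import numpy as np

    def lsb_embed(cover, bits):
        """Replace the LSB of the first len(bits) pixels with the message bits."""
        stego = cover.flatten().copy()
        if len(bits) > stego.size:
            raise ValueError("message longer than cover capacity")
        stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | np.asarray(bits, dtype=stego.dtype)
        return stego.reshape(cover.shape)

    def lsb_extract(stego, n_bits):
        """Read back the first n_bits least significant bits of the stego image."""
        return (stego.flatten()[:n_bits] & 1).tolist()

    cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    assert lsb_extract(lsb_embed(cover, msg), len(msg)) == msg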

  15. Network-based recommendation algorithms: A review

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of systems that use it, and finally discuss open research directions and challenges.

  16. Automated Vectorization of Decision-Based Algorithms

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  17. Hybrid and adaptive meta-model-based global optimization

    NASA Astrophysics Data System (ADS)

    Gu, J.; Li, G. Y.; Dong, Z.

    2012-01-01

    As an efficient and robust technique for global optimization, meta-model-based search methods have been increasingly used in solving complex and computation-intensive design optimization problems. In this work, a hybrid and adaptive meta-model-based global optimization method that can automatically select appropriate meta-modelling techniques during the search process to improve search efficiency is introduced. The search initially applies three representative meta-models concurrently. Progress towards a better-performing model is then made by selecting sample data points adaptively according to the calculated values of the three meta-models, to improve modelling accuracy and search efficiency. To demonstrate the superior performance of the new algorithm over existing search methods, the new method is tested using various benchmark global optimization problems and applied to a real industrial design optimization example involving vehicle crash simulation. The method is particularly suitable for design problems involving computation-intensive, black-box analyses and simulations.

  18. Genetic algorithm approach for adaptive power and subcarrier allocation in multi-user OFDM systems

    NASA Astrophysics Data System (ADS)

    Reddy, Y. B.; Naraghi-Pour, Mort

    2007-04-01

    In this paper, a novel genetic algorithm application is proposed for adaptive power and subcarrier allocation in multi-user Orthogonal Frequency Division Multiplexing (OFDM) systems. To test the application, a simple genetic algorithm was implemented in the MATLAB language. With the goal of minimizing the overall transmit power while ensuring the fulfillment of each user's rate and bit error rate (BER) requirements, the proposed algorithm acquires the needed allocation through genetic search. The simulations were run for BERs from 0.1 to 0.00001, a data rate of 256 bits per OFDM block, and a chromosome length of 128. The results show that the genetic algorithm outperforms the method in [3] in subcarrier allocation. With 8 users and 128 subcarriers, the GA model performs better in power requirement than the approach in [4] but converges more slowly.

  19. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic, and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  20. A Fast, Locally Adaptive, Interactive Retrieval Algorithm for the Analysis of DIAL Measurements

    NASA Astrophysics Data System (ADS)

    Samarov, D. V.; Rogers, R.; Hair, J. W.; Douglass, K. O.; Plusquellic, D.

    2010-12-01

    Differential absorption light detection and ranging (DIAL) is a laser-based tool used for remote, range-resolved measurement of particular gases in the atmosphere, such as carbon dioxide and methane. In many instances it is of interest to study how these gases are distributed over a region such as a landfill, factory, or farm. While a single DIAL measurement only tells us about the distribution of a gas along a single path, a sequence of consecutive measurements provides information on how that gas is distributed over a region, making DIAL a natural choice for such studies. DIAL measurements present a number of interesting challenges: first, in order to convert the raw data to concentration it is necessary to estimate the derivative along the path of the measurement; second, as the distribution of gases across a region can be highly heterogeneous, it is important that the spatial nature of the measurements be taken into account; finally, since the set of collected measurements is commonly quite large, it is important for the method to be computationally efficient. Existing work based on Local Polynomial Regression (LPR) addresses the first two issues, but the issue of computational speed remains an open problem. In addition, another desirable property is to allow user input into the algorithm. In this talk we present a novel method based on LPR which utilizes a variant of the RODEO algorithm to provide a fast, locally adaptive, and interactive approach to the analysis of DIAL measurements. This methodology is motivated by and applied to several simulated examples and a study out of NASA Langley Research Center (LaRC) on the estimation of aerosol extinction in the atmosphere. A comparison study of our method against several other algorithms is also presented.
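
    Converting DIAL returns to concentration requires a range-resolved derivative estimate. As background for the local-polynomial approach the talk builds on, the sketch below fits a quadratic in a sliding window and reads off the first-derivative coefficient at the window centre; it uses a fixed bandwidth, whereas the RODEO-based method selects the bandwidth locally.

    import numpy as np

    def local_poly_derivative(r, y, half_window=5, degree=2):
        """Estimate dy/dr at each range bin by local polynomial regression.

        r, y -- 1-D arrays of range bins and the measured signal (e.g. log power ratio)
        """
        dydr = np.full(len(y), np.nan)
        for i in range(half_window, len(r) - half_window):
            sl = slice(i - half_window, i + half_window + 1)
            # Centre the fit on r[i]; the linear coefficient is the derivative there.
            coeffs = np.polyfit(r[sl] - r[i], y[sl], degree)
            dydr[i] = coeffs[-2]
        return dydr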

  1. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Embedded grid singular points are permitted but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of, and alignment with, the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  2. New communication schemes based on adaptive synchronization

    NASA Astrophysics Data System (ADS)

    Yu, Wenwu; Cao, Jinde; Wong, Kwok-Wo; Lü, Jinhu

    2007-09-01

    In this paper, adaptive synchronization with unknown parameters is discussed for a unified chaotic system by using the Lyapunov method and the adaptive control approach. Some communication schemes, including chaotic masking, chaotic modulation, and chaotic shift-key strategies, are then proposed based on the modified adaptive method. The transmitted signal is masked by the chaotic signal or modulated into the system, which effectively blurs the constructed return map and can resist return-map attacks. The driving system, with unknown parameters and functions, is almost completely unknown to the attackers, so it is more secure to apply this method to communication. Finally, some simulation examples based on the proposed communication schemes and some cryptanalysis work are given to verify the theoretical analysis in this paper.

  3. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  4. Image enhancement based on edge boosting algorithm

    NASA Astrophysics Data System (ADS)

    Ngernplubpla, Jaturon; Chitsobhuk, Orachat

    2015-12-01

    In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high-quality image from a single low-resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. Gradient statistics, in the form of a priority map based on separable gradient estimation, maximum-likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights: larger weights are applied to the higher-frequency details while the low-frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvement. It can be seen that the proposed edge boosting algorithm produces high-quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.

  5. A Novel Method for Optimum Global Positioning System Satellite Selection Based on a Modified Genetic Algorithm.

    PubMed

    Song, Jiancai; Xue, Guixiang; Kang, Yanan

    2016-01-01

    In this paper, a novel method for selecting a navigation satellite subset for a global positioning system (GPS) based on a genetic algorithm is presented. This approach is based on minimizing the factors in the geometric dilution of precision (GDOP) using a modified genetic algorithm (MGA) with an elite conservation strategy, adaptive selection, adaptive mutation, and a hybrid genetic algorithm that can select a subset of the satellites represented by specific numbers in the interval (4 ∼ n) while maintaining position accuracy. A comprehensive simulation demonstrates that the MGA-based satellite selection method effectively selects the correct number of optimal satellite subsets using receiver autonomous integrity monitoring (RAIM) or fault detection and exclusion (FDE). This method is more adaptable and flexible for GPS receivers, particularly for those used in handset equipment and mobile phones. PMID:26943638

  6. A Novel Method for Optimum Global Positioning System Satellite Selection Based on a Modified Genetic Algorithm

    PubMed Central

    Song, Jiancai; Xue, Guixiang; Kang, Yanan

    2016-01-01

    In this paper, a novel method for selecting a navigation satellite subset for a global positioning system (GPS) based on a genetic algorithm is presented. This approach is based on minimizing the factors in the geometric dilution of precision (GDOP) using a modified genetic algorithm (MGA) with an elite conservation strategy, adaptive selection, adaptive mutation, and a hybrid genetic algorithm that can select a subset of the satellites represented by specific numbers in the interval (4 ∼ n) while maintaining position accuracy. A comprehensive simulation demonstrates that the MGA-based satellite selection method effectively selects the correct number of optimal satellite subsets using receiver autonomous integrity monitoring (RAIM) or fault detection and exclusion (FDE). This method is more adaptable and flexible for GPS receivers, particularly for those used in handset equipment and mobile phones. PMID:26943638

  8. Schwarz-Based Algorithms for Compressible Flows

    NASA Technical Reports Server (NTRS)

    Tidriri, M. D.

    1996-01-01

    We investigate in this paper the application of Schwarz-based algorithms to compressible flows. First we study the combination of these methods with defect-correction procedures. We then study the effect on the Schwarz-based methods of replacing the explicit treatment of the boundary conditions by an implicit one. In the last part of this paper we study the combination of these methods with Newton-Krylov matrix-free methods. Numerical experiments that show the performance of our approaches are then presented.
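
    For readers unfamiliar with the basic building block, the following sketch applies an alternating (multiplicative) Schwarz iteration to a 1-D Poisson model problem with two overlapping subdomains; it only illustrates the domain decomposition idea and contains none of the compressible-flow, defect-correction, or Newton-Krylov machinery studied in the paper.

      import numpy as np

      def solve_subdomain(n, h, f_int, left, right):
          # Direct solve of -u'' = f on n interior points with Dirichlet data left/right.
          A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h**2
          b = f_int.copy()
          b[0] += left / h**2
          b[-1] += right / h**2
          return np.linalg.solve(A, b)

      # Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact solution x(1-x)/2.
      N = 101
      x = np.linspace(0.0, 1.0, N)
      h = x[1] - x[0]
      f = np.ones(N)
      u = np.zeros(N)
      dom1 = np.arange(1, 60)        # interior points of subdomain 1
      dom2 = np.arange(40, N - 1)    # interior points of subdomain 2 (overlaps dom1)

      for sweep in range(30):        # alternating Schwarz: each solve uses the latest traces
          u[dom1] = solve_subdomain(len(dom1), h, f[dom1], u[0], u[dom1[-1] + 1])
          u[dom2] = solve_subdomain(len(dom2), h, f[dom2], u[dom2[0] - 1], u[-1])

      print("max error:", np.abs(u - x * (1 - x) / 2).max())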

  9. Anticipation versus adaptation in Evolutionary Algorithms: The case of Non-Stationary Clustering

    NASA Astrophysics Data System (ADS)

    González, A. I.; Graña, M.; D'Anjou, A.; Torrealdea, F. J.

    1998-07-01

    From the technological point of view, it is usually more important to ensure the ability to react promptly to changing environmental conditions than to try to forecast them. Evolutionary Algorithms were initially proposed to drive the adaptation of complex systems to varying or uncertain environments. In the general setting, the adaptive-anticipatory dilemma reduces to the placement of the interaction with the environment in the computational schema. Adaptation consists of estimating the proper parameters from present data in order to react to a present environment situation. Anticipation consists of estimating them from present data in order to react to a future environment situation. This duality is expressed in the Evolutionary Computation paradigm by the precise location at which present data enter the computation of the individuals' fitness function. In this paper we consider several instances of Evolutionary Algorithms applied to a precise problem and perform an experiment that tests their response as anticipative and adaptive mechanisms. The non-stationary problem considered is Non-Stationary Clustering, more precisely the adaptive Color Quantization of image sequences. The experiment illustrates our ideas and gives some quantitative results that may support the proposition of the Evolutionary Computation paradigm for other tasks that require interaction with a non-stationary environment.

  10. DARAL: A Dynamic and Adaptive Routing Algorithm for Wireless Sensor Networks.

    PubMed

    Estévez, Francisco José; Glösekötter, Peter; González, Jesús

    2016-01-01

    The evolution of Smart City projects is pushing researchers and companies to develop more efficient embedded hardware as well as more efficient communication technologies. These communication technologies are the focus of this work, which presents a new routing algorithm based on dynamically-allocated sub-networks and node roles. Among its features, our algorithm presents a fast set-up time, a reduced overhead and a hierarchical organization, which allows for the application of complex management techniques. This work presents a routing algorithm based on dynamically-allocated hierarchical clustering, which uses the link quality indicator as a reference parameter, maximizing the network coverage and minimizing the control message overhead and the convergence time. The test scenario and analysis are based on the density measure, considered as the node degree. The routing algorithm is compared with some of the best-known routing algorithms for different scenario densities. PMID:27347962

  11. DARAL: A Dynamic and Adaptive Routing Algorithm for Wireless Sensor Networks

    PubMed Central

    Estévez, Francisco José; Glösekötter, Peter; González, Jesús

    2016-01-01

    The evolution of Smart City projects is pushing researchers and companies to develop more efficient embedded hardware as well as more efficient communication technologies. These communication technologies are the focus of this work, which presents a new routing algorithm based on dynamically-allocated sub-networks and node roles. Among its features, our algorithm presents a fast set-up time, a reduced overhead and a hierarchical organization, which allows for the application of complex management techniques. This work presents a routing algorithm based on dynamically-allocated hierarchical clustering, which uses the link quality indicator as a reference parameter, maximizing the network coverage and minimizing the control message overhead and the convergence time. The test scenario and analysis are based on the density measure, considered as the node degree. The routing algorithm is compared with some of the best-known routing algorithms for different scenario densities. PMID:27347962

  12. Automated DNA Base Pair Calling Algorithm

    1999-07-07

    The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
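
    A minimal sketch of the derivative-based peak picking and two-channel ratio idea is given below; it is a toy illustration (the thresholds, the two-letter alphabet, and all names are ours) and omits the spacing models, histogram statistics, confidence levels, and Gaussian deconvolution described above.

      import numpy as np

      def pick_peaks(signal, min_height, min_spacing):
          # Candidate peaks: first derivative changes sign (+ to -) with negative curvature.
          d1 = np.gradient(signal)
          d2 = np.gradient(d1)
          cand = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
                          & (signal[:-1] >= min_height))[0]
          peaks, last = [], -min_spacing
          for idx in cand:                    # enforce a minimum peak spacing
              if idx - last >= min_spacing:
                  peaks.append(idx)
                  last = idx
          return np.array(peaks)

      def call_bases(chan_a, chan_b, peaks, ratio_threshold=1.0):
          # Toy two-channel call from the channel ratio at each peak centre.
          return ['A' if chan_a[p] / max(chan_b[p], 1e-9) > ratio_threshold else 'B'
                  for p in peaks]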

  13. Adaptive GOP structure based on motion coherence

    NASA Astrophysics Data System (ADS)

    Ma, Yanzhuo; Wan, Shuai; Chang, Yilin; Yang, Fuzheng; Wang, Xiaoyu

    2009-08-01

    Adaptive Group of Pictures (GOP) is helpful for increasing the efficiency of video encoding by taking account of characteristics of video content. This paper proposes a method for adaptive GOP structure selection for video encoding based on motion coherence, which extracts key frames according to motion acceleration, and assigns coding type for each key and non-key frame correspondingly. Motion deviation is then used instead of motion magnitude in the selection of the number of B frames. Experimental results show that the proposed method for adaptive GOP structure selection achieves performance gain of 0.2-1dB over the fixed GOP, and has the advantage of better transmission resilience. Moreover, this method can be used in real-time video coding due to its low complexity.

  14. Spatio-temporal adaptation algorithm for two-dimensional reacting flows

    NASA Astrophysics Data System (ADS)

    Pervaiz, Mehtab M.; Baron, Judson R.

    1988-01-01

    A spatio-temporal adaptive algorithm for solving the unsteady Euler equations with chemical source terms is presented. Quadrilateral cells are used in two spatial dimensions which allow for embedded meshes tracking moving flow features with spatially varying time-steps which are multiples of global minimum time-steps. Blast wave interactions corresponding to a perfect gas (frozen) and a Lighthill dissociating gas (nonequilibrium) are considered for circular arc cascade and 90 degree bend duct geometries.

  15. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
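
    The core sampling step can be sketched as below: draw from a Gaussian-mixture proposal and compute self-normalized importance weights for a possibly multimodal target. The adaptive construction of the mixture and the polynomial chaos surrogate are not reproduced; the names and the toy bimodal target are ours.

      import numpy as np
      from scipy import stats

      def gm_importance_sample(log_target, means, covs, weights, n, rng):
          # Draw from the Gaussian-mixture proposal.
          comps = rng.choice(len(weights), size=n, p=weights)
          samples = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])
          # Proposal log-density (log-sum-exp over mixture components).
          log_q = np.logaddexp.reduce(
              [np.log(w) + stats.multivariate_normal.logpdf(samples, m, C)
               for w, m, C in zip(weights, means, covs)], axis=0)
          log_w = log_target(samples) - log_q
          w = np.exp(log_w - log_w.max())
          return samples, w / w.sum()           # self-normalized importance weights

      rng = np.random.default_rng(1)
      log_target = lambda x: np.logaddexp(      # toy bimodal posterior
          stats.multivariate_normal.logpdf(x, [-2, 0], np.eye(2)),
          stats.multivariate_normal.logpdf(x, [3, 0], np.eye(2))) - np.log(2)
      samples, w = gm_importance_sample(log_target, means=[[-2, 0], [3, 0]],
                                        covs=[np.eye(2)] * 2, weights=[0.5, 0.5],
                                        n=2000, rng=rng)
      print("posterior mean estimate:", w @ samples)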

  16. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.

  17. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.

  18. Spin-adapted density matrix renormalization group algorithms for quantum chemistry

    NASA Astrophysics Data System (ADS)

    Sharma, Sandeep; Chan, Garnet Kin-Lic

    2012-03-01

    We extend the spin-adapted density matrix renormalization group (DMRG) algorithm of McCulloch and Gulacsi [Europhys. Lett. 57, 852 (2002)], 10.1209/epl/i2002-00393-0 to quantum chemical Hamiltonians. This involves using a quasi-density matrix, to ensure that the renormalized DMRG states are eigenfunctions of Ŝ², and the Wigner-Eckart theorem, to reduce overall storage and computational costs. We argue that the spin-adapted DMRG algorithm is most advantageous for low spin states. Consequently, we also implement a singlet-embedding strategy due to Tatsuaki [Phys. Rev. E 61, 3199 (2000)], 10.1103/PhysRevE.61.3199 where we target high spin states as a component of a larger fictitious singlet system. Finally, we present an efficient algorithm to calculate one- and two-body reduced density matrices from the spin-adapted wavefunctions. We evaluate our developments with benchmark calculations on transition metal system active space models. These include the Fe2S2, [Fe2S2(SCH3)4]2-, and Cr2 systems. In the case of Fe2S2, the spin-ladder spacing is on the microHartree scale, and here we show that we can target such very closely spaced states. In [Fe2S2(SCH3)4]2-, we calculate particle and spin correlation functions, to examine the role of sulfur bridging orbitals in the electronic structure. In Cr2 we demonstrate that spin-adaptation with the Wigner-Eckart theorem and using singlet embedding can yield up to an order of magnitude increase in computational efficiency. Overall, these calculations demonstrate the potential of using spin-adaptation to extend the range of DMRG calculations in complex transition metal problems.

  19. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for the extraction of edge information by using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  20. [An adaptive scaling hybrid algorithm for reduction of CT artifacts caused by metal objects].

    PubMed

    Chen, Yu; Luo, Hai; Zhou, He-qin

    2009-03-01

    A new adaptive hybrid filtering algorithm is proposed to reduce the artifacts caused by metal in CT images. First, the method preprocesses the projection data of the metal region, and the image is reconstructed by the filtered back projection (FBP) method. Then the expectation maximization (EM) algorithm is performed iteratively on the original metal projection data. Finally, a compensating procedure is applied to the reconstructed metal region. The simulation results demonstrate that the proposed algorithm can remove the metal artifacts while effectively preserving the structural information of the metal object, and it ensures that the tissues around the metal are not distorted. The method is also computationally efficient and effective for CT images that contain several metal objects.

  1. An adaptive immune optimization algorithm with dynamic lattice searching operation for fast optimization of atomic clusters

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Wu, Genhua

    2014-08-01

    Geometrical optimization of atomic clusters is performed by a development of adaptive immune optimization algorithm (AIOA) with dynamic lattice searching (DLS) operation (AIOA-DLS method). By a cycle of construction and searching of the dynamic lattice (DL), DLS algorithm rapidly makes the clusters more regular and greatly reduces the potential energy. DLS can thus be used as an operation acting on the new individuals after mutation operation in AIOA to improve the performance of the AIOA. The AIOA-DLS method combines the merit of evolutionary algorithm and idea of dynamic lattice. The performance of the proposed method is investigated in the optimization of Lennard-Jones clusters within 250 atoms and silver clusters described by many-body Gupta potential within 150 atoms. Results reported in the literature are reproduced, and the motif of Ag61 cluster is found to be stacking-fault face-centered cubic, whose energy is lower than that of previously obtained icosahedron.

  2. A biomimetic adaptive algorithm and low-power architecture for implantable neural decoders.

    PubMed

    Rapoport, Benjamin I; Wattanapanitch, Woradorn; Penagos, Hector L; Musallam, Sam; Andersen, Richard A; Sarpeshkar, Rahul

    2009-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat.
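
    The architecture described is analog and continuous-time; as a rough digital analogue, the sketch below trains a bank of FIR filters with the LMS rule to map binned firing rates to control parameters. It is our simplification, not the authors' circuit or learning rule.

      import numpy as np

      def lms_filter_bank(rates, targets, n_taps=8, mu=0.01):
          # rates: (n_steps, n_cells) binned firing rates; targets: (n_steps, n_out).
          n_steps, n_cells = rates.shape
          n_out = targets.shape[1]
          W = np.zeros((n_out, n_cells, n_taps))   # one FIR kernel per cell and output
          hist = np.zeros((n_cells, n_taps))
          y_out = np.zeros((n_steps, n_out))
          for t in range(n_steps):
              hist = np.roll(hist, 1, axis=1)
              hist[:, 0] = rates[t]                # most recent bin in tap 0
              y = np.tensordot(W, hist, axes=([1, 2], [0, 1]))
              e = targets[t] - y                   # on-line teaching signal
              W += mu * e[:, None, None] * hist[None, :, :]
              y_out[t] = y
          return W, y_out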

  3. A Biomimetic Adaptive Algorithm and Low-Power Architecture for Implantable Neural Decoders

    PubMed Central

    Rapoport, Benjamin I.; Wattanapanitch, Woradorn; Penagos, Hector L.; Musallam, Sam; Andersen, Richard A.; Sarpeshkar, Rahul

    2010-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat. PMID:19964345

  4. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity, while in the VSS-SR-APA the optimal selection of input regressors is performed during adaptation. The presented algorithms offer good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
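
    For reference, a basic affine projection update with a fixed scalar step size is sketched below in a system identification setting; the paper's contribution is to replace this scalar mu by an MSD-optimal step-size vector (and, in the SPU/SR variants, to update only selected coefficients or regressors), which is not reproduced here.

      import numpy as np

      def apa_identify(x, d, L=16, P=4, mu=0.5, delta=1e-4):
          # L: filter length, P: projection order, mu: fixed step size.
          w = np.zeros(L)
          for n in range(L + P, len(x)):
              # Columns are the last P input regressors (most recent sample first).
              X = np.column_stack([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
              dn = d[n - P + 1:n + 1][::-1]
              e = dn - X.T @ w
              w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
          return w

      rng = np.random.default_rng(0)
      h = rng.normal(size=16)                      # unknown channel
      x = rng.normal(size=5000)
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
      print("coefficient error:", np.linalg.norm(apa_identify(x, d) - h))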

  5. An adaptive support driven reweighted L1-regularization algorithm for fluorescence molecular tomography.

    PubMed

    Shi, Junwei; Liu, Fei; Pu, Huangsheng; Zuo, Simin; Luo, Jianwen; Bai, Jing

    2014-11-01

    Fluorescence molecular tomography (FMT) is a promising in vivo functional imaging modality in preclinical study. When solving the ill-posed FMT inverse problem, L1 regularization can preserve the details and reduce the noise in the reconstruction results effectively. Moreover, compared with the regular L1 regularization, reweighted L1 regularization is recently reported to improve the performance. In order to realize the reweighted L1 regularization for FMT, an adaptive support driven reweighted L1-regularization (ASDR-L1) algorithm is proposed in this work. This algorithm has two integral parts: an adaptive support estimate and the iteratively updated weights. In the iteratively reweighted L1-minimization sub-problem, different weights are equivalent to different regularization parameters at different locations. Thus, ASDR-L1 can be considered as a kind of spatially variant regularization methods for FMT. Physical phantom and in vivo mouse experiments were performed to validate the proposed algorithm. The results demonstrate that the proposed reweighted L1-reguarization algorithm can significantly improve the performance in terms of relative quantitation and spatial resolution.
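
    The reweighting idea itself can be shown on a generic sparse recovery problem: each outer pass solves a weighted L1 problem (here with plain ISTA) and then sets the weights to 1/(|x| + eps), so that strong coefficients are penalized less on the next pass, i.e. a spatially varying regularization. This is a generic sketch under our own parameter choices, not the ASDR-L1 algorithm with its adaptive support estimate.

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def reweighted_l1(A, b, lam=0.1, outer=5, inner=200, eps=1e-3):
          m, n = A.shape
          x = np.zeros(n)
          weights = np.ones(n)
          L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
          for _ in range(outer):
              for _ in range(inner):                 # ISTA on the weighted L1 subproblem
                  grad = A.T @ (A @ x - b)
                  x = soft_threshold(x - grad / L, lam * weights / L)
              weights = 1.0 / (np.abs(x) + eps)      # reweighting: location-dependent penalty
          return x

      rng = np.random.default_rng(2)
      A = rng.normal(size=(60, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
      b = A @ x_true + 0.01 * rng.normal(size=60)
      x_hat = reweighted_l1(A, b)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))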

  6. Stereo matching based on adaptive support-weight approach in RGB vector space.

    PubMed

    Geng, Yingnan; Zhao, Yan; Chen, Hexin

    2012-06-01

    Gradient similarity is a simple yet powerful data descriptor which shows robustness in stereo matching. In this paper, an RGB vector space is defined for stereo matching. Based on the adaptive support-weight approach, a matching algorithm is proposed which uses pixel gradient similarity, color similarity, and proximity in the RGB vector space to compute the corresponding support-weights and dissimilarity measurements. The experimental results, evaluated on the Middlebury stereo benchmark, show that our algorithm outperforms other stereo matching algorithms and that incorporating gradient similarity achieves better results in stereo matching. PMID:22695592
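
    A minimal sketch of adaptive support-weight aggregation is given below: each pixel in a window is weighted by its colour similarity to the window centre (a Euclidean distance in RGB vector space) and its spatial proximity, and the weighted absolute colour difference is aggregated. The gradient-similarity term that is the paper's main addition is omitted, and the parameter values are illustrative.

      import numpy as np

      def support_weights(window_rgb, center_rgb, gamma_c=7.0, gamma_p=12.0):
          # window_rgb: (h, w, 3) patch; center_rgb: (3,) colour of the window centre.
          h, w, _ = window_rgb.shape
          dc = np.linalg.norm(window_rgb - center_rgb, axis=2)   # colour distance in RGB space
          yy, xx = np.mgrid[0:h, 0:w]
          dp = np.hypot(yy - h // 2, xx - w // 2)                # spatial proximity
          return np.exp(-dc / gamma_c) * np.exp(-dp / gamma_p)

      def window_cost(left_win, right_win, w_left, w_right):
          # Weighted sum of absolute colour differences between candidate windows.
          e = np.abs(left_win.astype(float) - right_win.astype(float)).sum(axis=2)
          w = w_left * w_right
          return (w * e).sum() / w.sum()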

  7. Building simplification algorithms based on user cognition in mobile environment

    NASA Astrophysics Data System (ADS)

    Shen, Jie; Shi, Junfei; Wang, Meizhen; Wu, Chenyan

    2008-10-01

    With the development of LBS, mobile maps should adaptively satisfy the cognitive requirements of the user. User cognition in a mobile environment is much more objective-oriented and also places a heavier burden on the user than in a static environment, so the holistic ideas and methods of map generalization are not fully suitable for mobile maps. This paper takes building simplification in habitation generalization as an example, analyzes the characteristics of user cognition in a mobile environment and the basic rules of building simplification, reviews the state of the art of building simplification algorithms in static and mobile environments, and puts forward the idea of hierarchical building simplification based on user cognition. The Hunan Road business district of Nanjing was taken as the test area, building data in ESRI shapefile format were used as test data, and the simplification algorithm was implemented. The method is user-centered: it calculates the distance between the user and the building to be simplified and uses this distance as the basis for choosing different simplification algorithms for different spaces. This contribution aims to hierarchically present buildings at different levels of detail through real-time simplification.

  8. Reducing interferences in wireless communication systems by mobile agents with recurrent neural networks-based adaptive channel equalization

    NASA Astrophysics Data System (ADS)

    Beritelli, Francesco; Capizzi, Giacomo; Lo Sciuto, Grazia; Napoli, Christian; Tramontana, Emiliano; Woźniak, Marcin

    2015-09-01

    Solving the channel equalization problem in communication systems relies on adaptive filtering algorithms. Today, Mobile Agents (MAs) with Recurrent Neural Networks (RNNs) can also be adopted for effective interference reduction in modern wireless communication systems (WCSs). In this paper, MAs with RNNs are proposed as novel computing algorithms for reducing interference in WCSs by performing adaptive channel equalization; the resulting method is called MAs-RNNs. We implement this new paradigm for interference reduction. Simulation results and evaluations demonstrate the effectiveness of the approach and show that better transmission performance in wireless communication networks can be achieved by using the MAs-RNNs based adaptive filtering algorithm.

  9. Implementation of automatic white-balance based on adaptive-luminance

    NASA Astrophysics Data System (ADS)

    Zhong, Jian; Yao, Su-Ying; Xu, Jiang-Tao

    2009-03-01

    A novel automatic white-balance algorithm based on adaptive luminance is proposed in this paper. The algorithm redefines the gray-pixel region, which allows gray pixels to be filtered accurately. Furthermore, using the relations between the gray pixels' luminance under a standard light source and their chroma (Cb, Cr) shifts under other color temperatures, the algorithm establishes equations between the captured pixels and the original ones, from which the gains of the RGB channels can be estimated exactly. To evaluate the proposed algorithm, both an objective comparison method and a subjective observation method are used, and the test results show that images corrected by the proposed algorithm are superior to those produced by traditional algorithms. Finally, the algorithm is implemented as a VLSI design, and the synthesis results show that it can satisfy real-time applications.
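
    A much-simplified version of the gray-pixel idea is sketched below: pixels with near-neutral chroma are taken as the gray region, and the channel gains are chosen so that these pixels average to equal R, G and B. The thresholds and the simple gray test are ours and stand in for the paper's adaptive-luminance criterion.

      import numpy as np

      def awb_gains(rgb, cb_thresh=15.0, cr_thresh=15.0):
          r = rgb[..., 0].astype(float)
          g = rgb[..., 1].astype(float)
          b = rgb[..., 2].astype(float)
          y = 0.299 * r + 0.587 * g + 0.114 * b            # luma
          cb = -0.169 * r - 0.331 * g + 0.5 * b            # blue-difference chroma
          cr = 0.5 * r - 0.419 * g - 0.081 * b             # red-difference chroma
          gray = (np.abs(cb) < cb_thresh) & (np.abs(cr) < cr_thresh) & (y > 20) & (y < 235)
          r_avg, g_avg, b_avg = r[gray].mean(), g[gray].mean(), b[gray].mean()
          return g_avg / r_avg, 1.0, g_avg / b_avg         # gains for (R, G, B), G as reference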

  10. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled Logistic map, is proposed, together with a fitness function for cognitive radio resource allocation. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system and has a faster convergence speed.
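
    The chaotic driver itself is easy to reproduce: a ring of logistic maps coupled to their neighbours, whose trajectories can seed or perturb a GA population. This sketch shows only the map; how the sequences enter selection, crossover, or mutation is specific to the paper and not reproduced here.

      import numpy as np

      def coupled_logistic(n_vars, n_steps, mu=4.0, eps=0.1, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.05, 0.95, size=n_vars)
          out = np.empty((n_steps, n_vars))
          for t in range(n_steps):
              fx = mu * x * (1.0 - x)                        # logistic map
              x = (1.0 - eps) * fx + eps * np.roll(fx, 1)    # coupling to the neighbouring map
              out[t] = x
          return out                                         # chaotic sequences in [0, 1]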

  11. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation, and an infrared image directly shows this otherwise invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many kinds of fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in the infrared image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for image enhancement are based mainly on the gray-scale distribution of the infrared image and are not associated with the actual features of the image scene. As a result, such enhancement is not well targeted, which limits its use in infrared surveillance. In this paper we have developed a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we choose feasible parameters to describe the infrared image. Second, we construct a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by constructing the new histogram. Experimental results show that the algorithm performs better than the other methods mentioned in this paper for infrared scene images.

  12. Adaptive Cognitive-Based Selection of Learning Objects

    ERIC Educational Resources Information Center

    Karampiperis, Pythagoras; Lin, Taiyu; Sampson, Demetrios G.; Kinshuk

    2006-01-01

    Adaptive cognitive-based selection is recognized as among the most significant open issues in adaptive web-based learning systems. In order to adaptively select learning resources, the definition of adaptation rules according to the cognitive style or learning preferences of the learners is required. Although some efforts have been reported in…

  13. A new adaptive self-tuning Fourier coefficients algorithm for periodic torque ripple minimization in permanent magnet synchronous motors (PMSM).

    PubMed

    Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto

    2013-03-19

    Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.

  14. A new adaptive self-tuning Fourier coefficients algorithm for periodic torque ripple minimization in permanent magnet synchronous motors (PMSM).

    PubMed

    Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto

    2013-01-01

    Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple. PMID:23519345

  15. PDE Based Algorithms for Smooth Watersheds.

    PubMed

    Hodneland, Erlend; Tai, Xue-Cheng; Kalisch, Henrik

    2016-04-01

    Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.

  16. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce these artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two-pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects; the algorithm therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and it can therefore estimate the missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used testing objects consisting of 7 ellipsoids separated along the z direction, with cone beam artifacts generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two-pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  17. On the estimation algorithm used in adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn B.

    1993-01-01

    The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a necessary impact on the overall performance seeking control scheme exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  18. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
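
    The FXLMS benchmark is standard enough to sketch: the reference signal is filtered through a model of the secondary (actuator-to-sensor) path before it enters the LMS weight update. The sketch below cancels a toy single-tone disturbance; the secondary-path model, step size, and signals are illustrative assumptions, and the Er-MCSI controller is not shown.

      import numpy as np

      def fxlms(reference, disturbance, sec_path, n_taps=32, mu=1e-3):
          w = np.zeros(n_taps)
          x_buf = np.zeros(n_taps)           # raw reference history
          fx_buf = np.zeros(n_taps)          # filtered-reference history
          y_buf = np.zeros(len(sec_path))    # actuator output history
          errors = np.empty(len(reference))
          for n in range(len(reference)):
              x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
              y = w @ x_buf                                   # control signal
              y_buf = np.roll(y_buf, 1); y_buf[0] = y
              e = disturbance[n] + sec_path @ y_buf           # residual at the error sensor
              fx_buf = np.roll(fx_buf, 1)
              fx_buf[0] = sec_path @ x_buf[:len(sec_path)]    # reference filtered by path model
              w -= mu * e * fx_buf                            # filtered-x LMS update
              errors[n] = e
          return errors

      t = np.arange(20000)
      x = np.sin(2 * np.pi * 0.01 * t)                        # reference (engine order)
      d = 0.8 * np.sin(2 * np.pi * 0.01 * t + 0.3)            # correlated disturbance
      s = np.array([0.0, 0.5, 0.3, 0.1])                      # assumed secondary-path model
      e = fxlms(x, d, s)
      print("initial/final error power:", (e[:2000]**2).mean() / (e[-2000:]**2).mean())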

  19. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.

  20. Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Yi; Ouyang, Yen-Chieh; Wang, Chuin-Mu; Chang, Chein-I.

    2010-12-01

    Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm to improve the display quality and contrast of a scene. Because digital cameras must maintain the shadow in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as parameter space for different contrast types so that not only the original histogram shape features can be preserved, but also the contrast can be enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while extruding the details of objects simultaneously.
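
    The general shape of such a curve can be sketched with an inverse-hyperbolic-tangent mapping of normalized luminance; the gain and bias below stand in for the adaptively selected parameters, and this is only a generic member of the curve family, not the authors' AIHT parameter-selection rule.

      import numpy as np

      def iht_contrast(image, gain=0.9, bias=0.0):
          # Map [0, 255] to roughly (-1, 1), stretch with arctanh, rescale back to [0, 255].
          x = image.astype(float) / 255.0
          z = np.clip(gain * (2.0 * x - 1.0) + bias, -0.999, 0.999)
          y = np.arctanh(z) / np.arctanh(0.999)       # inverse hyperbolic tangent stretch
          return np.clip(255.0 * (y + 1.0) / 2.0, 0, 255).astype(np.uint8)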

  1. Physically constrained voxel-based penalty adaptation for ultra-fast IMRT planning.

    PubMed

    Wahl, Niklas; Bangert, Mark; Kamerling, Cornelis P; Ziegenhein, Peter; Bol, Gijsbert H; Raaymakers, Bas W; Oelfke, Uwe

    2016-07-08

    Conventional treatment planning in intensity-modulated radiation therapy (IMRT) is a trial-and-error process that usually involves tedious tweaking of optimization parameters. Here, we present an algorithm that automates part of this process, in particular the adaptation of voxel-based penalties within normal tissue. Thereby, the proposed algorithm explicitly considers a priori known physical limitations of photon irradiation. The efficacy of the developed algorithm is assessed during treatment planning studies comprising 16 prostate and 5 head and neck cases. We study the eradication of hot spots in the normal tissue, effects on target coverage and target conformity, as well as selected dose volume points for organs at risk. The potential of the proposed method to generate class solutions for the two indications is investigated. Run-times of the algorithms are reported. Physically constrained voxel-based penalty adaptation is an adequate means to automatically detect and eradicate hot-spots during IMRT planning while maintaining target coverage and conformity. Negative effects on organs at risk are comparably small and restricted to lower doses. Using physically constrained voxel-based penalty adaptation, it was possible to improve the generation of class solutions for both indications. Considering the reported run-times of less than 20 s, physically constrained voxel-based penalty adaptation has the potential to reduce the clinical workload during planning and automated treatment plan generation in the long run, facilitating adaptive radiation treatments.

  2. Adaptive multiwavelet-based watermarking through JPW masking.

    PubMed

    Cui, Lihong; Li, Wenguo

    2011-04-01

    In this paper, a multibit, multiplicative, spread spectrum watermarking scheme using the discrete multiwavelet (including unbalanced and balanced multiwavelet) transform is presented. Performance improvement with respect to existing algorithms is obtained by means of a new just perceptual weighting (JPW) model. The new model incorporates various masking effects of human visual perception by taking into account the eye's sensitivity to noise changes depending on the spatial frequency, luminance, and texture of all the image subbands. In contrast to the conventional JND threshold model, the JPW model, which describes the minimum perceptual sensitivity weighting to noise changes, is better suited to nonadditive watermarking. Specifically, the watermarking strength is adaptively adjusted to obtain minimum perceptual distortion by employing the JPW model. Correspondingly, an adaptive optimum decoding is derived using a statistical model based on the generalized Gaussian distribution (GGD) for the multiwavelet coefficients of the cover image. Furthermore, the impact of the multiwavelet characteristics on the proposed watermarking scheme is also analyzed. Finally, the experimental results show that the proposed JPW model can improve the quality of the watermarked image and provide greater robustness of the watermark as compared with a variety of state-of-the-art algorithms.

  3. Adaptive Guidance and Control Algorithms applied to the X-38 Reentry Mission

    NASA Astrophysics Data System (ADS)

    Graesslin, M.; Wallner, E.; Burkhardt, J.; Schoettle, U.; Well, K. H.

    International Space Station's Crew Return/Rescue Vehicle (CRV) is planned to autonomously return the complete crew of 7 astronauts back to earth in case of an emergency. As a prototype of such a vehicle, the X-38 is being developed and built by NASA with European participation. The X-38 is a lifting body with a hypersonic lift to drag ratio of about 0.9. In comparison to the Space Shuttle Orbiter, the X-38 has less aerodynamic manoeuvring capability and fewer actuators. Within the German technology programme TETRA (TEchnologies for future space TRAnsportation systems) contributing to the X-38 program, guidance and control algorithms have been developed and applied to the X-38 reentry mission. The adaptive guidance concept conceived combines an on-board closed-loop predictive guidance algorithm with flight load control that temporarily overrides the attitude commands of the predictive component if the corresponding load constraints are violated. The predictive guidance scheme combines an optimization step and a sequence of constraint restoration cycles. In order to satisfy on-board computation limitations the complete scheme is performed only during the exo-atmospheric flight coast phase. During the controlled atmospheric flight segment the task is reduced to a repeatedly solved targeting problem based on the initial optimal solution, thus omitting in-flight constraints. To keep the flight loads - especially the heat flux, which is in fact a major concern of the X-38 reentry flight - below their maximum admissible values, a flight path controller based on quadratic minimization techniques may override the predictive guidance command for a flight along the constraint boundary. The attitude control algorithms developed are based on dynamic inversion. This methodology enables the designer to straightforwardly devise a controller structure from the system dynamics. The main advantage of this approach with regard to reentry control design lies in the fact that…

  4. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The winner determination problem in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of the AFSA. Experimental results show that the HAFSA is a fast and efficient algorithm for winner determination. Compared with the Ant Colony Optimization Algorithm, it performs well and has broad application prospects.

  5. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
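
    The conventional D8 baseline against which ASI is compared assigns each cell a single flow direction toward its steepest-descent neighbour; a plain sketch is given below (the ASI overrides of these directions along inlets, pipes, and ditches are not shown, and the function name is ours).

      import numpy as np

      def d8_flow_direction(dem):
          nrows, ncols = dem.shape
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
          dists = [2**0.5, 1.0, 2**0.5, 1.0, 1.0, 2**0.5, 1.0, 2**0.5]
          direction = -np.ones(dem.shape, dtype=int)   # -1 marks pits or flat cells
          for i in range(nrows):
              for j in range(ncols):
                  best_slope, best_k = 0.0, -1
                  for k, ((di, dj), dist) in enumerate(zip(offsets, dists)):
                      ni, nj = i + di, j + dj
                      if 0 <= ni < nrows and 0 <= nj < ncols:
                          slope = (dem[i, j] - dem[ni, nj]) / dist   # positive = downhill
                          if slope > best_slope:
                              best_slope, best_k = slope, k
                  direction[i, j] = best_k
          return direction                              # per-cell index into offsets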

  6. Study on algorithm for night vision panoramic image basing on image segmentation and multimode displaying technology

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie

    2009-07-01

    Based on a single panoramic annular lens optical system and external low-luminance CCD sensors, a 360-degree panoramic night vision image processing hardware platform was established. A night vision panoramic image algorithm, grounded in image segmentation and multimode displaying technology, is presented. The annular image is unwrapped and corrected to a conventional rectangular panorama by the panoramic image unwrapping algorithm. A night vision image enhancement algorithm, based on adaptive piecewise linear gray transformation (APLGT) and Laplacian of Gaussian (LOG) edge detection, is also given. The APLGT algorithm adaptively truncates the image histogram at both ends to obtain a smaller dynamic range and thus enhance the contrast of the night vision image, while the LOG algorithm is well suited to finding and detecting dim small targets in night vision conditions. After extensive experiments, the night vision panoramic image algorithm, based on image segmentation and multimode displaying, was successfully implemented on a TMS320DM642, and the system can reliably and dynamically cover the 360-degree field of view of the panoramic night vision image.
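
    The APLGT step described above amounts to clipping a small fraction of the darkest and brightest pixels and linearly stretching what remains; a minimal sketch follows, with the clip fraction chosen by us rather than by the paper's adaptive rule.

      import numpy as np

      def aplgt_stretch(image, clip_fraction=0.01):
          # Truncate the histogram at both ends, then stretch linearly to [0, 255].
          flat = image.astype(float).ravel()
          lo, hi = np.quantile(flat, [clip_fraction, 1.0 - clip_fraction])
          if hi <= lo:
              return image.copy()
          out = (image.astype(float) - lo) / (hi - lo)
          return np.clip(255.0 * out, 0, 255).astype(np.uint8)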

  7. Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm

    NASA Astrophysics Data System (ADS)

    Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko

    2013-04-01

    .e. the observed scaling factors C(xα)) on a distance aαK by introducing an offset parameter K, which results in slightly different equations to calculate a and a0. The described technique is currently being used by the Flemish Environmental Agency in an online forecasting system of river discharges within Flanders (Belgium). However, rescaling the radar data using the described algorithm is not always giving rise to an improved weather radar product. Probably one of the main reasons is the parameters K and ? which are implemented as constants. It can be expected that, among others, depending on the characteristics of the rainfall, different values for the parameters should be used. Adaptation of the parameter values is achieved by an online calibration of K and ? at each time step (every 15 minutes), using validated rain gauge measurements as ground truth. Results demonstrate that rescaling radar images using optimized values for K and ? at each time step lead to a significant improvement of the rainfall estimation, which in turn will result in higher quality discharge predictions. Moreover, it is shown that calibrated values for K and ? can be obtained in near-real time. References Cole, S. J., and Moore, R. J. (2008). Hydrological modelling using raingauge- and radar-based estimators of areal rainfall. Journal of Hydrology, 358(3-4), 159-181. Hardy, R.L., (1971) Multiquadric equations of topography and other irregular surfaces, Journal of Geophysical Research, 76(8): 1905-1915. Moore, R. J., Watson, B. C., Jones, D. A. and Black, K. B. (1989). London weather radar local calibration study. Technical report, Institute of Hydrology.
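
    The underlying Hardy multiquadric surface fit (Hardy, 1971) interpolates the gauge/radar scaling factors with basis functions sqrt(|x - x_j|^2 + c^2); a bare-bones version is sketched below, without the offset parameter K, the constant term a0, or the online calibration discussed above, and with illustrative gauge values.

      import numpy as np

      def multiquadric_fit(points, values, c=1.0):
          # Solve for coefficients a_j in s(x) = sum_j a_j * sqrt(|x - x_j|^2 + c^2).
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
          return np.linalg.solve(np.sqrt(d**2 + c**2), values)

      def multiquadric_eval(points, coeffs, query, c=1.0):
          d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
          return np.sqrt(d**2 + c**2) @ coeffs

      gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # gauge locations (km)
      factors = np.array([1.2, 0.8, 1.0])                         # gauge/radar scaling factors
      coeffs = multiquadric_fit(gauges, factors)
      print(multiquadric_eval(gauges, coeffs, np.array([[5.0, 5.0]])))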

  8. Validation of Learning Effort Algorithm for Real-Time Non-Interfering Based Diagnostic Technique

    ERIC Educational Resources Information Center

    Hsu, Pi-Shan; Chang, Te-Jeng

    2011-01-01

    The objective of this research is to validate the algorithm of learning effort which is an indicator of a new real-time and non-interfering based diagnostic technique. IC3 Mentor, the adaptive e-learning platform fulfilling the requirements of intelligent tutor system, was applied to 165 university students. The learning records of the subjects…

  9. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths preserved while the Physarum-inspired mathematical model (PMM) evolves its adaptive networks. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of the parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best parameter values for the TSP.
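
    The strategy can be pictured as an ordinary pheromone update with an extra deposit on a set of 'critical' edges; the sketch below treats those edges as a given list, which is our abstraction of the PMM coupling rather than the paper's actual Physarum computation.

      import numpy as np

      def update_pheromone(tau, tours, lengths, rho=0.1, q=1.0,
                           critical_edges=(), boost=2.0):
          # tau: (n, n) pheromone matrix; tours: list of node-index lists (closed tours).
          tau *= (1.0 - rho)                                 # evaporation
          crit = set(critical_edges) | {(j, i) for i, j in critical_edges}
          for tour, length in zip(tours, lengths):
              for i, j in zip(tour, tour[1:] + tour[:1]):    # edges of the closed tour
                  deposit = q / length
                  if (i, j) in crit:
                      deposit *= boost                       # extra pheromone on critical edges
                  tau[i, j] += deposit
                  tau[j, i] += deposit
          return tau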

  10. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
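
    For readers unfamiliar with the exact least squares baseline that the multiscale algorithm approximates, the following sketch implements plain Gauss-Markov optimal interpolation; the covariance function and its parameters are illustrative assumptions, and the multiscale machinery of the paper is not reproduced here.

```python
import numpy as np

def optimal_interpolation(obs, obs_locs, grid_locs, cov, noise_var):
    """Exact Gauss-Markov optimal interpolation (the baseline the multiscale
    method approximates). obs_locs is (n, d), grid_locs is (m, d), and
    cov(a, b) is an assumed prior covariance function between location sets.
    Returns the estimate and its formal error variance on the grid."""
    C_dd = cov(obs_locs, obs_locs) + noise_var * np.eye(len(obs))   # data-data covariance
    C_gd = cov(grid_locs, obs_locs)                                 # grid-data covariance
    estimate = C_gd @ np.linalg.solve(C_dd, obs)
    # error variance: diag(C_gg) - diag(C_gd C_dd^-1 C_gd^T)
    err_var = np.diag(cov(grid_locs, grid_locs)) - \
        np.sum(C_gd * np.linalg.solve(C_dd, C_gd.T).T, axis=1)
    return estimate, err_var

def gauss_cov(xa, xb, sigma2=1.0, L=500.0):
    """Example isotropic Gaussian covariance (an assumption, not the paper's model)."""
    d = np.linalg.norm(xa[:, None, :] - xb[None, :, :], axis=-1)
    return sigma2 * np.exp(-(d / L) ** 2)
```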

  11. EMINIM: An Adaptive and Memory-Efficient Algorithm for Genotype Imputation

    PubMed Central

    Kang, Hyun Min; Zaitlen, Noah A.

    2010-01-01

    Abstract Genome-wide association studies have proven to be a highly successful method for identification of genetic loci for complex phenotypes in both humans and model organisms. These large-scale studies rely on the collection of hundreds of thousands of single nucleotide polymorphisms (SNPs) across the genome. Standard high-throughput genotyping technologies capture only a fraction of the total genetic variation. Recent efforts have shown that it is possible to “impute” with high accuracy the genotypes of SNPs that are not collected in the study, provided that they are present in a reference data set which contains both the SNPs collected in the study as well as other SNPs. Here we introduce a novel HMM-based technique to solve the imputation problem that addresses several shortcomings of existing methods. First, our method is adaptive, which lets it estimate population genetic parameters from the data and be applied to model organisms that have very different evolutionary histories. Compared to previous methods, our method is up to ten times more accurate on model organisms such as mouse. Second, our algorithm scales in memory usage with the number of collected markers as opposed to the number of known SNPs. This issue is very relevant due to the size of the reference data sets currently being generated. We compare our method with existing methods on mouse and human data sets, and show that it has comparable or better performance and much lower memory usage. The method is available for download at http://genetics.cs.ucla.edu/eminim. PMID:20377463

  12. Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm

    NASA Astrophysics Data System (ADS)

    Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.

    2011-12-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands from microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients with newly arriving MW rain rates, and calibration using static coefficients that are derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product, and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA
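
    The two-step structure (rain/no-rain separation followed by rain-rate regression) can be sketched as below; this is a simplified stand-in using scikit-learn, with ordinary linear regression in place of SCaMPR's stepwise regression, and hypothetical array names.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

def fit_rain_models(ir_predictors, mw_rain_rate, rain_threshold=0.1):
    """Calibrate the two steps against MW rain rates: a discriminant classifier
    for rain/no-rain and a linear regressor for rain rate on raining samples."""
    is_raining = (mw_rain_rate > rain_threshold).astype(int)
    classifier = LinearDiscriminantAnalysis().fit(ir_predictors, is_raining)
    regressor = LinearRegression().fit(ir_predictors[is_raining == 1],
                                       mw_rain_rate[is_raining == 1])
    return classifier, regressor

def estimate_rain_rate(classifier, regressor, ir_predictors):
    """Apply the calibrated models to new IR predictors."""
    rate = np.zeros(len(ir_predictors))
    raining = classifier.predict(ir_predictors) == 1
    rate[raining] = np.clip(regressor.predict(ir_predictors[raining]), 0.0, None)
    return rate
```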

  13. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    SciTech Connect

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. The selection of bases is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in the PCE expansion remain unchanged across Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrates great effectiveness in comparison with the non-adaptive PCKF and the EnKF.

  14. Widefield multiphoton microscopy with image-based adaptive optics

    NASA Astrophysics Data System (ADS)

    Chang, C.-Y.; Cheng, L.-C.; Su, H.-W.; Yen, W.-C.; Chen, S.-J.

    2012-10-01

    Unlike conventional multiphoton microscopy based on pixel-by-pixel point scanning, a widefield multiphoton microscope based on spatiotemporal focusing has been developed to provide fast optical sectioning images at a frame rate over 100 Hz. In order to overcome the aberrations of the widefield multiphoton microscope and the wavefront distortion from turbid biospecimens, an image-based adaptive optics system (AOS) was integrated. The feedback control signal of the AOS was acquired by locally maximizing the image intensity provided by the widefield multiphoton excited microscope, using a hill-climbing algorithm. The control signal was then utilized to drive a deformable mirror in such a way as to eliminate the aberration and distortion. The two-photon excited fluorescence (TPEF) intensity of an R6G-doped PMMA thin film is increased by 3.7-fold. Furthermore, the TPEF image quality of 1 μm fluorescent beads sealed in agarose gel at different depths is improved.
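
    A minimal sketch of the hill-climbing loop (Python; the measure_intensity callback, step size and iteration count are assumptions) that perturbs one deformable-mirror actuator at a time and keeps only changes that increase the image-intensity metric:

```python
import numpy as np

def hill_climb_dm(measure_intensity, n_actuators, step=0.05, n_iters=200, rng=None):
    """Image-based AO by hill climbing: `measure_intensity(cmd)` is an assumed
    callback that applies the DM command vector and returns the image metric."""
    if rng is None:
        rng = np.random.default_rng()
    cmd = np.zeros(n_actuators)
    best = measure_intensity(cmd)
    for _ in range(n_iters):
        k = rng.integers(n_actuators)              # pick one actuator
        trial = cmd.copy()
        trial[k] += step * rng.choice([-1.0, 1.0]) # small trial perturbation
        value = measure_intensity(trial)
        if value > best:                           # keep only improving moves
            cmd, best = trial, value
    return cmd, best
```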

  15. Optimization-based wavefront sensorless adaptive optics for multiphoton microscopy.

    PubMed

    Antonello, Jacopo; van Werkhoven, Tim; Verhaegen, Michel; Truong, Hoa H; Keller, Christoph U; Gerritsen, Hans C

    2014-06-01

    Optical aberrations have detrimental effects in multiphoton microscopy. These effects can be curtailed by implementing model-based wavefront sensorless adaptive optics, which only requires the addition of a wavefront shaping device, such as a deformable mirror (DM), to an existing microscope. The aberration correction is achieved by maximizing a suitable image quality metric. We implement a model-based aberration correction algorithm in a second-harmonic microscope. The tip, tilt, and defocus aberrations are removed from the basis functions used for the control of the DM, as these aberrations induce distortions in the acquired images. We compute the parameters of a quadratic polynomial that is used to model the image quality metric directly from experimental input-output measurements. Finally, we apply the aberration correction by maximizing the image quality metric using the least-squares estimate of the unknown aberration.
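
    The core of the model-based correction, fitting a quadratic model of the metric and solving for its maximum, can be sketched per aberration mode as follows (a least-squares parabola fit under assumed bias values; not the authors' exact estimator):

```python
import numpy as np

def estimate_mode_correction(biases, metric_values):
    """Least-squares parabola fit for one aberration mode.

    biases        : bias amplitudes applied to the DM mode, e.g. [-b, 0.0, +b]
    metric_values : image-quality metric measured at each bias
    Returns the bias amplitude that maximizes the fitted quadratic."""
    a, b, c = np.polyfit(biases, metric_values, 2)   # metric ~ a*x**2 + b*x + c
    if a >= 0:
        raise ValueError("metric is not locally concave; correction unreliable")
    return -b / (2.0 * a)                            # vertex of the parabola

# Applying the negative of the estimated aberration, mode by mode, corrects it.
```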

  16. A New Aloha Anti-Collision Algorithm Based on CDMA

    NASA Astrophysics Data System (ADS)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems. The problem affects the integrity of data transmission during the communication process in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access technology and can effectively reduce the collision probability between tags. For the same number of tags, the algorithm is effective in reducing the reader recognition time and improving the overall system throughput rate.

  17. An improved space-based algorithm for recognizing vehicle models from the side view

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Ding, Youdong; Zhang, Li; Li, Rong; Zhu, Jiang; Xie, Zhifeng

    2015-12-01

    Vehicle model matching from the side view is a problem that meets the practical needs of actual users but has received less attention from researchers. We propose an improved feature-space-based algorithm for this problem. The algorithm combines the advantages of several classic algorithms: it effectively combines global and local features, eliminates data redundancy, and improves data separability. Classification is finally completed by a quick and efficient KNN classifier. Real-scene test results show that the proposed method is robust, accurate, insensitive to external factors, adaptable to large angle deviations, and can be applied in a practical application.

  18. Vision-based algorithms for near-host object detection and multilane sensing

    NASA Astrophysics Data System (ADS)

    Kenue, Surender K.

    1995-01-01

    Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.

  19. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    The Raman spectroscopy technique is a powerful and non-invasive technique for molecular fingerprint detection, which has been widely used in many areas such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; therefore, we present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
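
    The cyclic approximation idea can be sketched as below with SciPy; uniform interior knots stand in for the paper's adaptive knot generation, and the knot count and iteration number are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def bspline_baseline(wavenumber, intensity, n_knots=20, n_iters=30):
    """Iterative cubic B-spline baseline estimate. `wavenumber` must be strictly
    increasing; uniform knots are a simplification of the adaptive placement."""
    knots = np.linspace(wavenumber[1], wavenumber[-2], n_knots)[1:-1]
    work = intensity.astype(float).copy()
    for _ in range(n_iters):
        spline = LSQUnivariateSpline(wavenumber, work, knots, k=3)
        baseline = spline(wavenumber)
        # clip peaks: points above the current fit are replaced by the fit, so
        # Raman peaks progressively stop pulling the baseline upward
        work = np.minimum(work, baseline)
    return baseline

# corrected_spectrum = intensity - bspline_baseline(wavenumber, intensity)
```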

  20. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  1. A nonlinear model reference adaptive inverse control algorithm with pre-compensator

    NASA Astrophysics Data System (ADS)

    Xiao, Bin; Yang, Tie-Jun; Liu, Zhi-Gang

    2005-12-01

    In this paper, reduced-order modeling (ROM) technology and its corresponding linear theory are extended from linear dynamic systems to nonlinear ones, and H∞ control theory is employed in the frequency domain to design a pre-compensator for the nonlinear system. The adaptive model inverse control (AMIC) theory for nonlinear systems is improved as well, yielding the model reference adaptive inverse control with pre-compensator (PCMRAIC). The aim of the algorithm is to construct an overall control strategy. As a practical example of the application, a numerical simulation was carried out in Matlab, and the numerical results are given. The proposed strategy realizes linearizing control of the nonlinear dynamic system and achieves good performance in dealing with the nonlinear system.

  2. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified due to the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.

  3. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    PubMed

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversal". Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can enable a machine to automatically adapt to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm is tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future generation Braille reading devices thus holds strong promise.

  4. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm

    PubMed Central

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left to right, research shows that even fluent reading requires right-to-left movements of the finger, known as “reversal”. Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can enable a machine to automatically adapt to reversal gestures of one’s finger. Through Braille-reading-analogous tasks, the algorithm is tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects’ performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects’ Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future generation Braille reading devices thus holds strong promise. PMID:26849058

  5. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    PubMed

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network composed of nodes that autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms place requirements on hardware and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, solutions based on range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have a higher localization error compared with range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
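
    A toy sketch of the refinement stage (Python; population size, mutation scale and deployment area are assumptions) in which a genetic algorithm searches for the node position that best matches the DV-Hop distance estimates to the anchors:

```python
import numpy as np

def ga_refine_position(anchors, est_dists, pop_size=60, n_gens=100,
                       area=(0.0, 100.0), rng=None):
    """Evolve candidate (x, y) positions minimizing the squared mismatch between
    DV-Hop distance estimates and geometric distances to the anchors."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = area
    pop = rng.uniform(lo, hi, size=(pop_size, 2))

    def fitness(p):                                   # lower is better
        d = np.linalg.norm(anchors - p, axis=1)
        return np.sum((d - est_dists) ** 2)

    for _ in range(n_gens):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]      # truncation selection
        idx = rng.integers(len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]  # crossover
        pop += rng.normal(0.0, 0.5, size=pop.shape)            # Gaussian mutation
        pop = np.clip(pop, lo, hi)
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmin(scores)]
```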

  6. Adaptive implicit-explicit finite element algorithms for fluid mechanics problems

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.

    1988-01-01

    The adaptive implicit-explicit (AIE) approach is presented for the finite-element solution of various problems in computational fluid mechanics. In the AIE approach, the elements are dynamically (adaptively) arranged into differently treated groups. The differences in treatment could be based on considerations such as the cost efficiency, the type of spatial or temporal discretization employed, the choice of field equations, etc. Several numerical tests are performed to demonstrate that this approach can achieve substantial savings in CPU time and memory.

  7. Cordic based algorithms for software defined radio (SDR) baseband processing

    NASA Astrophysics Data System (ADS)

    Heyne, B.; Götze, J.

    2006-09-01

    This paper presents two Cordic-based algorithms which may be used for digital baseband processing in OFDM- and/or CDMA-based communication systems. The first is a linear least-squares-based multiuser detector for CDMA incorporating descrambling and despreading. The second is a pure Cordic-based FFT implementation. Both algorithms can be implemented using solely Cordic-based architectures (e.g. coprocessors or ASIPs). The algorithms exactly fit the needs of a multistandard terminal as they are both freely parameterizable. This applies both to the accuracy of the results and to the parameters of the performed function (e.g. the size of the FFT).
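
    For reference, the textbook CORDIC rotation that such architectures build on can be sketched as follows (floating-point Python for clarity; a hardware ASIP would use fixed-point shifts and adds):

```python
import math

def cordic_rotate(x, y, angle, n_iters=32):
    """CORDIC in circular rotation mode: rotate (x, y) by `angle` radians using
    only shifts, adds and a small angle table, the primitive behind Cordic-based
    FFT butterflies. Convergence range is roughly |angle| < 1.74 rad; larger
    angles need a pre-rotation by multiples of pi/2."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    gain = 1.0
    for i in range(n_iters):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))     # constant CORDIC gain
    z = angle
    for i in range(n_iters):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain                        # compensate the gain

# cordic_rotate(1.0, 0.0, math.pi / 4) -> approximately (0.7071, 0.7071)
```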

  8. An incremental clustering algorithm based on Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherically or elliptically distributed datasets. This paper replaces the Euclidean distance of classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies the defect of the fuzzy c-means algorithm but also increases training accuracy.
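
    The substitution can be sketched as a membership update in which each cluster uses its own Mahalanobis metric (Python; the per-cluster inverse covariances are assumed to be estimated elsewhere from membership-weighted data):

```python
import numpy as np

def mahalanobis_sq(x, center, cov_inv):
    """Squared Mahalanobis distance of point x to a cluster center."""
    diff = x - center
    return float(diff @ cov_inv @ diff)

def fcm_memberships(X, centers, cov_invs, m=2.0):
    """Fuzzy c-means membership update with the Euclidean distance replaced by
    per-cluster Mahalanobis distances, so elongated (elliptical) clusters are
    no longer penalized."""
    n, c = len(X), len(centers)
    d = np.empty((n, c))
    for k in range(c):
        d[:, k] = [mahalanobis_sq(x, centers[k], cov_invs[k]) for x in X]
    d = np.maximum(d, 1e-12)
    # u_ik = 1 / sum_j (d_ik / d_ij)^(1/(m-1)) with d holding squared distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (1.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```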

  9. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which adjusts dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895
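
    The log-linear selection step amounts to a softmax over behavior scores; a minimal sketch (with hypothetical feature and weight arrays) is given below.

```python
import numpy as np

def select_behavior(behavior_features, weights, rng=None):
    """Log-linear (softmax) behavior selection: the probability of choosing a
    behavior is proportional to exp(w . f(behavior, state)). `behavior_features`
    is an (n_behaviors, n_features) array of assumed feature values."""
    if rng is None:
        rng = np.random.default_rng()
    scores = behavior_features @ weights
    scores = scores - scores.max()                 # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return rng.choice(len(probs), p=probs), probs

# e.g. behaviors = ["prey", "swarm", "follow", "random"], one feature row per behavior
```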

  10. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    PubMed Central

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which adjusts dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm. PMID:25691895

  11. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which adjusts dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  12. NORSAR Final Scientific Report Adaptive Waveform Correlation Detectors for Arrays: Algorithms for Autonomous Calibration

    SciTech Connect

    Gibbons, S J; Ringdal, F; Harris, D B

    2009-04-16

    Correlation detection is a relatively new approach in seismology that offers significant advantages in increased sensitivity and event screening over standard energy detection algorithms. The basic concept is that a representative event waveform is used as a template (i.e. matched filter) that is correlated against a continuous, possibly multichannel, data stream to detect new occurrences of that same signal. These algorithms are therefore effective at detecting repeating events, such as explosions and aftershocks at a specific location. This final report summarizes the results of a three-year cooperative project undertaken by NORSAR and Lawrence Livermore National Laboratory. The overall objective has been to develop and test a new advanced, automatic approach to seismic detection using waveform correlation. The principal goal is to develop an adaptive processing algorithm. By this we mean that the detector is initiated using a basic set of reference ('master') events to be used in the correlation process, and then an automatic algorithm is applied successively to provide improved performance by extending the set of master events selectively and strategically. These additional master events are generated by an independent, conventional detection system. A periodic analyst review will then be applied to verify the performance and, if necessary, adjust and consolidate the master event set. A primary focus of this project has been the application of waveform correlation techniques to seismic arrays. The basic procedure is to perform correlation on the individual channels, and then stack the correlation traces using zero-delay beam forming. Array methods such as frequency-wavenumber analysis can be applied to this set of correlation traces to help guarantee the validity of detections and lower the detection threshold. In principle, the deployment of correlation detectors against seismically active regions could involve very large numbers of very specific detectors. To
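
    The core detection step, correlating a master-event template against continuous array data and stacking the correlation traces with zero delay, can be sketched as follows (Python; the threshold and the O(n·m) loop are illustrative simplifications of an operational implementation):

```python
import numpy as np

def correlation_detector(channels, templates, threshold=0.7):
    """Correlate each channel with its master-event template, stack the
    normalized correlation traces with zero delay, and declare detections
    where the stack exceeds a threshold."""
    stack = None
    for data, tmpl in zip(channels, templates):
        tmpl = (tmpl - tmpl.mean()) / (tmpl.std() * len(tmpl))
        n = len(data) - len(tmpl) + 1
        cc = np.empty(n)
        for i in range(n):                          # sliding Pearson correlation
            win = data[i:i + len(tmpl)]
            cc[i] = np.dot(tmpl, (win - win.mean()) / (win.std() + 1e-12))
        stack = cc if stack is None else stack + cc
    stack /= len(channels)
    return np.flatnonzero(stack > threshold), stack
```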

  13. The application of Firefly algorithm in an Adaptive Emergency Evacuation Centre Management (AEECM) for dynamic relocation of flood victims

    NASA Astrophysics Data System (ADS)

    ChePa, Noraziah; Hashim, Nor Laily; Yusof, Yuhanis; Hussain, Azham

    2016-08-01

    A flood evacuation centre is defined as a temporary location or area for sheltering people from a disaster, particularly flood, as a rescue or precautionary measure. Gazetted evacuation centres are normally located at secure places with a small chance of being flooded. However, due to extreme floods, several evacuation centres in Kelantan were unexpectedly inundated. Currently, no study has proposed a decision support aid to relocate victims and resources of an evacuation centre when the situation worsens. Therefore, this study proposes a decision aid model to be utilized in realizing an adaptive emergency evacuation centre management system. The study comprises two main phases: development of the algorithm and models, and development of a web-based and mobile app. The proposed model uses the Firefly multi-objective optimization algorithm to create an optimal schedule for the relocation of victims and resources for an evacuation centre. The proposed decision aid model and the adaptive system can support the National Security Council's response mechanisms for handling level II (state-level) disaster management, especially in providing better management of flood evacuation centres.

  14. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records within this project. Each of these algorithms also provides uncertainty information at pixel level. Within the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements shows that the ensemble still correlates well compared with the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions in order to reduce the uncertainty of the ensemble. Results of different versions of the ensemble for aerosol optical depth will be presented and discussed. The results are validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
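
    One plausible way to use the pixel-level uncertainties for weighting, an inverse-variance weighted mean, is sketched below; the project may combine the retrievals differently.

```python
import numpy as np

def ensemble_aod(aod_stack, sigma_stack):
    """Pixel-level inverse-variance weighted ensemble of several AOD retrievals.
    Inputs are (n_algorithms, ny, nx) arrays in which NaN marks a missing
    retrieval; returns the ensemble mean and its formal uncertainty."""
    valid = ~np.isnan(aod_stack) & ~np.isnan(sigma_stack)
    w = np.where(valid, 1.0 / np.square(np.where(valid, sigma_stack, 1.0)), 0.0)
    v = np.where(valid, aod_stack, 0.0)
    wsum = w.sum(axis=0)
    mean = np.where(wsum > 0, (w * v).sum(axis=0) / np.maximum(wsum, 1e-30), np.nan)
    sigma = np.where(wsum > 0, 1.0 / np.sqrt(np.maximum(wsum, 1e-30)), np.nan)
    return mean, sigma
```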

  15. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because environment stiffness affects the system formed when the robot end-effector contacts the environment, an adaptive generalized predictive control algorithm based on quantitative feedback theory (QFT) is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to handle the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the required system performance and improve the robustness. Simulation results show that the proposed control algorithm for the robot end-effector contact force system is effective. PMID:24973336

  16. A double-loop structure in the adaptive generalized predictive control algorithm for control of robot end-point contact force.

    PubMed

    Wen, Shuhuan; Zhu, Jinghai; Li, Xiaoli; Chen, Shengyong

    2014-09-01

    Robot force control is an essential issue in robotic intelligence. There is high uncertainty when the robot end-effector contacts the environment. Because environment stiffness affects the system formed when the robot end-effector contacts the environment, an adaptive generalized predictive control algorithm based on quantitative feedback theory (QFT) is designed for the robot end-point contact force system. The controller of the internal loop is designed on the foundation of QFT to handle the uncertainty of the system. An adaptive GPC algorithm is used to design the external loop controller to improve the performance and the robustness of the system. The two closed loops used in the design approach realize the required system performance and improve the robustness. Simulation results show that the proposed control algorithm for the robot end-effector contact force system is effective.

  17. Fuzzy-based adaptive bandwidth control for loss guarantees.

    PubMed

    Siripongwutikorn, Peerapon; Banerjee, Sujata; Tipper, David

    2005-09-01

    This paper presents the use of adaptive bandwidth control (ABC) for a quantitative packet loss rate guarantee to aggregate traffic in packet switched networks. ABC starts with some initial amount of bandwidth allocated to a queue and adjusts it over time based on online measurements of system states to ensure that the allocated bandwidth is just enough to attain the specified loss requirement. Consequently, no a priori detailed traffic information is required, making ABC more suitable for efficient aggregate quality of service (QoS) provisioning. We propose an ABC algorithm called augmented Fuzzy (A-Fuzzy) control, whereby fuzzy logic control is used to keep an average queue length at an appropriate target value, and the measured packet loss rate is used to augment the standard control to achieve better performance. An extensive simulation study based on both theoretical traffic models and real traffic traces under a wide range of system configurations demonstrates that the A-Fuzzy control itself is highly robust, yields high bandwidth utilization, and is indeed a viable alternative and improvement to static bandwidth allocation (SBA) and existing adaptive bandwidth allocation schemes. Additionally, we develop a simple and efficient measurement-based admission control procedure which limits the amount of input traffic in order to maintain the performance of the A-Fuzzy control at an acceptable level.

  18. Mouse EEG spike detection based on the adapted continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.

    2016-04-01

    Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.

  19. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  20. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  1. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.

    PubMed

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer.

  2. KNOWBOT; An adaptive data base interface

    SciTech Connect

    Heger, A.S.; Koen, B.U. . Dept. of Mechanical Engineering)

    1991-02-01

    This paper reports on an adaptive interface, KNOWBOT, designed to solve some of the problems that face the users of large centralized data bases. The interface applies the neural network approach to information retrieval from a data base. The data base is a subset of the Nuclear Plant Reliability Data System. The interface KNOWBOT preempts an existing data base interface and works in conjunction with it. By design, KNOWBOT starts as a tabula rasa but acquires knowledge through its interactions with the user and the data base. The interface uses its gained knowledge to personalize the data base retrieval process and to induce new queries. The interface also forgets the information that is no longer needed by the user. These self-organizing features of the interface reduce the scope of the data base to the subsets that are highly relevant to the user's needs. A proof-of-principle version of this interface has been implemented in Common LISP on a Texas Instruments Explorer I workstation. Experiments with KNOWBOT have been successful in demonstrating the robustness of the model, especially with induction and self-organization. This paper describes the design of KNOWBOT and presents some of the experimental results.

  3. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.

  4. An efficient self-adaptive model for chaotic image encryption algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Xiaoling; Ye, Guodong

    2014-12-01

    In this paper, an efficient self-adaptive model for chaotic image encryption algorithm is proposed. With the help of the classical structure of permutation-diffusion and double simple two-dimensional chaotic systems, an efficient and fast encryption algorithm is designed. However, different from most of the existing methods which are found insecure upon chosen-plaintext or known-plaintext attack in the process of permutation or diffusion, the keystream generated in both operations of our method is dependent on the plain-image. Therefore, different plain-images will have different keystreams in both processes even just only a bit is changed in the plain-image. This design can solve the problem of fixed chaotic sequence produced by the same initial conditions but for different images. Moreover, the operation speed is high because complex mathematical methods, such as Runge-Kutta method, of solving the high-dimensional partial differential equations are avoided. Numerical experiments show that the proposed self-adaptive method can well resist against chosen-plaintext and known-plaintext attacks, and has high security and efficiency.
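
    A minimal permutation-diffusion sketch with a plaintext-dependent keystream is given below (Python; the logistic map, the digest used to perturb the initial conditions and the key values are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x); x0 must lie in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=(0.3456, 3.99, 0.7123, 3.97)):
    """Permutation-diffusion with a plaintext-dependent keystream: the chaotic
    initial conditions are perturbed by a simple digest of the plain image, so
    changing one pixel changes both keystreams."""
    flat = img.astype(np.uint8).ravel()
    n = flat.size
    delta = (int(flat.sum()) % 256) / 1024.0                  # plaintext-dependent offset
    x0, r1, y0, r2 = key
    perm = np.argsort(logistic_stream(x0 + delta, r1, n))     # permutation stage
    ks = (logistic_stream(y0 + delta, r2, n) * 256).astype(np.uint8)
    cipher = np.bitwise_xor(flat[perm], ks)                   # diffusion stage (XOR)
    return cipher.reshape(img.shape), perm
    # decryption: XOR with the same keystream, then invert the permutation
```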

  5. Optical Cluster-Finding with an Adaptive Matched-Filter Technique: Algorithm and Comparison with Simulations

    SciTech Connect

    Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.

    2007-10-29

    We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is approximately 85% complete and over 90% pure for clusters with masses above 1.0 × 10^14 h^-1 M and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of its virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.

  6. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  7. Combined string searching algorithm based on knuth-morris- pratt and boyer-moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms’ effectiveness is searching speed. The larger the shift of the pattern relative to the string when a pattern and string character mismatch occurs, the higher the algorithm's running speed. This article offers a combined algorithm, which has been developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: the Knuth-Morris-Pratt algorithm is based on forward pattern matching and Boyer-Moore is based on backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger of the two shifts when a pattern and string character mismatch occurs. The article provides an example which illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms’ work and shows the advantage of the latter in solving the string searching problem.
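
    A sketch of the combined shift rule is given below: the text is scanned left to right and, on a mismatch, the pattern is shifted by the larger of the KMP border shift and the Boyer-Moore bad-character shift; both shifts are individually safe, so their maximum is as well (re-comparing the matched prefix after each shift is our simplification, not necessarily the article's).

```python
def build_failure(pattern):
    """KMP failure function: f[j] = length of the longest proper border of pattern[:j+1]."""
    f = [0] * len(pattern)
    k = 0
    for j in range(1, len(pattern)):
        while k > 0 and pattern[j] != pattern[k]:
            k = f[k - 1]
        if pattern[j] == pattern[k]:
            k += 1
        f[j] = k
    return f

def combined_search(text, pattern):
    """Return all start indices of `pattern` in `text` using max(KMP shift, bad-character shift)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return []
    f = build_failure(pattern)
    last = {c: i for i, c in enumerate(pattern)}   # rightmost occurrence of each character
    hits, i = [], 0
    while i <= n - m:
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:
            hits.append(i)
            i += m - f[m - 1]                      # KMP shift after a full match
            continue
        kmp_shift = j - f[j - 1] if j > 0 else 1
        bad = last.get(text[i + j], -1)
        bc_shift = j - bad if bad < j else 1       # bad-character shift
        i += max(kmp_shift, bc_shift, 1)
    return hits

# combined_search("abracadabra", "abra") -> [0, 7]
```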

  8. Real time algorithm invariant to natural lighting with LBP techniques through an adaptive thresholding implemented in GPU processors

    NASA Astrophysics Data System (ADS)

    Orjuela-Vargas, S. A.; Triana-Martinez, J.; Yañez, J. P.; Philips, W.

    2014-03-01

    Video analysis in real time requires fast and efficient algorithms to extract relevant information from a considerable number of frames per second, commonly 25. Furthermore, robust algorithms for outdoor visual scenes must retrieve corresponding features throughout the day, where a key challenge is dealing with lighting changes. Currently, Local Binary Pattern (LBP) techniques are widely used for extracting features due to their robustness to illumination changes and their low implementation requirements. We propose to compute an automatic threshold based on the distribution of the intensity residuals resulting from the pairwise comparisons used in LBP techniques. The intensity residual distribution can be modelled by a Generalized Gaussian Distribution (GGD). In this paper we compute the adaptive threshold using the parameters of the GGD. We present a CUDA implementation of our proposed algorithm, using the LBPSYM technique. Our approach is tested on videos of four different urban scenes with moving traffic, captured during day and night. The extracted features can be used in a further step to determine patterns, identify objects or detect background. However, further research must be conducted on blur correction, since the scenes at night are commonly blurred due to artificial lighting.
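
    The threshold computation can be sketched as below with SciPy's generalized normal distribution; deriving the threshold from a fixed quantile of the fitted GGD is one plausible rule and may differ from the paper's exact mapping.

```python
import numpy as np
from scipy.stats import gennorm

def adaptive_lbp_threshold(gray, quantile=0.6):
    """Fit a Generalized Gaussian Distribution to the intensity residuals of the
    8-neighbour pairwise comparisons and derive a threshold from the fitted
    parameters (beta, scale)."""
    g = gray.astype(float)
    residuals = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        residuals.append((shifted - g).ravel())
    residuals = np.concatenate(residuals)
    residuals = residuals[::max(1, residuals.size // 200000)]   # subsample for speed
    beta, loc, scale = gennorm.fit(residuals, floc=0.0)         # symmetric residuals
    return gennorm.ppf(quantile, beta, loc=loc, scale=scale)

# An LBP bit is then set only when the signed residual exceeds this threshold,
# which makes the code less sensitive to illumination-dependent noise.
```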

  9. Convergence study of various non-quadratic adaptive algorithms in the equalization of impulsive DS-CDMA channel

    NASA Astrophysics Data System (ADS)

    Jimaa, Shihab A.; Jadah, Mohamed E.

    2005-10-01

    This paper investigates the performance of various non-quadratic adaptive algorithms in the adaptation of a non-linear receiver, coupled with a second-order phase tracking subsystem, for an asynchronous DS-CDMA communication system impaired by a double-spread multipath channel and Gaussian mixture impulsive noise. These algorithms are the lower-order algorithm (where the power of the cost function is lower than 2), the least-mean mixed-norm algorithm (which combines the LMS and LMF cost functions), and the least mean square-fourth switching algorithm (which switches between LMS and LMF depending on the value of the error). The non-linear receiver comprises a feed-forward filter (FFF), a feedback filter (FBF), and an equalizer/second-order phase locked loop (PLL). The investigation studies the effect of the proposed algorithms on the performance of the non-linear receiver in terms of mean-square error (MSE) and bit-error rate (BER). Computer simulation results indicate that the proposed receiver with the least-mean mixed-norm algorithm gives the fastest convergence rate and similar BER performance in comparison with the NLMS adaptive receiver. Furthermore, extensive computer simulation tests have been carried out to determine the optimum values of the step-size, the power of the cost function, and the adaptation parameter of the proposed algorithms. Results show that the optimum values of the step-size for the lower-order, least-mean square-fourth, least-mean mixed-norm, and NLMS algorithms are 5x10^-4, 10^-6, 5x10^-4, and 0.01, respectively. The optimum value of the power of the lower-order algorithm is 1.9 and the optimum value of the adaptation parameter of the least-mean mixed-norm algorithm is 0.9.
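
    For reference, the core least-mean mixed-norm (LMMN) update blends the LMS and LMF error terms through a mixing parameter. The sketch below shows this update for a plain real-valued transversal filter; the FFF/FBF structure and the PLL of the receiver above are omitted, and the parameter values are placeholders.

```python
import numpy as np

def lmmn_filter(x, d, num_taps=11, mu=5e-4, delta=0.9):
    """Least-mean mixed-norm adaptive filter: the weight update blends the
    LMS term (e) and the LMF term (e**3) via the mixing parameter delta."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]          # regressor, most recent sample first
        y[n] = w @ u
        e = d[n] - y[n]
        # delta*e -> LMS contribution, (1 - delta)*e**3 -> LMF contribution
        w += mu * (delta * e + (1.0 - delta) * e**3) * u
    return w, y
```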

  10. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
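
    For context, dehazing methods of this kind ultimately invert the standard haze imaging model I = J*t + A*(1 - t). The sketch below shows that inversion with an optionally per-channel transmission map, as a hedged stand-in for the paper's wavelength-adaptive, context-adaptive map.

```python
import numpy as np

def dehaze(image, transmission, airlight, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) to recover the scene
    radiance J. 'transmission' may be per-channel to mimic wavelength-
    dependent turbidity; t is clamped to avoid amplifying noise."""
    t = np.maximum(transmission, t_min)
    if t.ndim == 2:                      # broadcast a single map over RGB channels
        t = t[..., None]
    return (image - airlight) / t + airlight
```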

  11. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    PubMed Central

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  12. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  13. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
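
    As a toy illustration of the idea only (not the EASA rule used in the paper), the sampling interval can be scaled by how well the currently harvested energy and the battery state cover the per-sample cost; all names and parameters below are hypothetical.

```python
def next_sampling_interval(base_interval, battery_level, harvest_rate,
                           consumption_per_sample, min_interval, max_interval):
    """Energy-aware sampling rule: sample faster when harvesting covers the
    per-sample cost and the battery is healthy, slow down otherwise."""
    # ratio > 1 means the energy harvested over one base interval covers one sample
    energy_ratio = (harvest_rate * base_interval) / consumption_per_sample
    interval = base_interval / max(energy_ratio * battery_level, 1e-6)
    return min(max(interval, min_interval), max_interval)
```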

  14. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559

  15. A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems

    NASA Astrophysics Data System (ADS)

    Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos

    2016-02-01

    A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.

  16. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm

    PubMed Central

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data will increase the complexity of classification, which is often utilized in the models, and thus reduce efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve the population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071

  17. A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Cao, Ruifen; Li, Guoli; Wu, Yican

    Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. The paper proposes a self-adaptive evolutionary algorithm (called SEA) for multi-objective optimization. In the SEA, the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. The fitness assignment of SEA realizes the twin goals of maintaining diversity in the population and guiding the population to the true Pareto front; the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation can keep diversity in all instances, including when the scales of the objectives differ greatly from each other. SEA is compared against the Non-dominated Sorting Genetic Algorithm (NSGA-II) on a set of test problems introduced by the MOEA community. Simulated results show that SEA is as effective as NSGA-II on most of the test functions, but when the scales of the objectives differ greatly from each other, SEA achieves a better distribution of non-dominated solutions.
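
    A common way to make Pc and Pm fitness-dependent, in the spirit described above (the paper's exact update rule may differ), is to lower both rates for above-average individuals and keep them high for below-average ones:

```python
def adaptive_rates(f, f_avg, f_max,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Fitness-dependent crossover/mutation probabilities: individuals close
    to the current best (f near f_max) get low Pc/Pm so good material is
    preserved; below-average individuals keep the high rates."""
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)      # 0 at the best, 1 at the average
        return (pc_lo + (pc_hi - pc_lo) * scale,
                pm_lo + (pm_hi - pm_lo) * scale)
    return pc_hi, pm_hi
```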

  18. Automatic classification of schizophrenia using resting-state functional language network via an adaptive learning algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Maohu; Jie, Nanfeng; Jiang, Tianzi

    2014-03-01

    A reliable and precise classification of schizophrenia is significant for its diagnosis and treatment. Functional magnetic resonance imaging (fMRI) is a novel tool increasingly used in schizophrenia research. Recent advances in statistical learning theory have led to applying pattern classification algorithms to assess the diagnostic value of functional brain networks discovered from resting-state fMRI data. The aim of this study was to propose an adaptive learning algorithm to distinguish schizophrenia patients from normal controls using the resting-state functional language network. Furthermore, the classification of schizophrenia was regarded as a sample selection problem in which a sparse subset of samples was chosen from the labeled training set. Using these selected samples, which we call informative vectors, a classifier for the clinical diagnosis of schizophrenia was established. We experimentally demonstrated that the proposed algorithm incorporating the resting-state functional language network achieved 83.6% leave-one-out accuracy on resting-state fMRI data of 27 schizophrenia patients and 28 normal controls. In contrast with K-Nearest-Neighbor (KNN), Support Vector Machine (SVM) and l1-norm methods, our method yielded better classification performance. Moreover, our results suggested that a dysfunction of the resting-state functional language network plays an important role in the clinical diagnosis of schizophrenia.

  19. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm.

    PubMed

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data will increase the complexity of classification, which is often utilized in the models, and thus reduce efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve the population diversity and prevent local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods.

  20. A new spectral variable selection pattern using competitive adaptive reweighted sampling combined with successive projections algorithm.

    PubMed

    Tang, Guo; Huang, Yue; Tian, Kuangda; Song, Xiangzhong; Yan, Hong; Hu, Jing; Xiong, Yanmei; Min, Shungeng

    2014-10-01

    The competitive adaptive reweighted sampling-successive projections algorithm (CARS-SPA) method was proposed as a novel variable selection approach to process multivariate calibration. The CARS was first used to select informative variables, and then SPA to refine the variables with minimum redundant information. The proposed method was applied to near-infrared (NIR) reflectance data of nicotine in tobacco lamina and NIR transmission data of active ingredient in pesticide formulation. As a result, fewer but more informative variables were selected by CARS-SPA than by direct CARS. In the system of pesticide formulation, a multiple linear regression (MLR) model using variables selected by CARS-SPA provided a better prediction than the full-range partial least-squares (PLS) model, successive projections algorithm (SPA) model and uninformative variables elimination-successive projections algorithm (UVE-SPA) processed model. The variable subsets selected by CARS-SPA included the spectral ranges with sufficient chemical information, whereas the uninformative variables were hardly selected.

  1. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise-reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. PMID:25284135

  2. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise-reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments.

  3. Efficient reference-free adaptive artifact cancellers for impedance cardiography based remote health care monitoring systems.

    PubMed

    Mallam, Madhavi; Rao, K Chandra Bhutan

    2016-01-01

    In this paper, a new model for adaptive artifact cancellation in impedance cardiography (ICG) signals is presented. It is a hybrid model based on wavelet decomposition and an adaptive filter. A novel feature of this model is the implementation of reference-free adaptive artifact cancellers (AAC). For this implementation, the reference signal is constructed using a wavelet transformation. During critical conditions the filter weights may become negative and cause an imbalance in the convergence. To overcome this problem, we introduce non-negative adaptive algorithms in the proposed artifact canceller. To accelerate the performance of the AAC, we propose exponential non-negative and normalized non-negative algorithms to update the filter coefficients. The computational complexity of the filtering section in a remote health care system is important to avoid inter-symbol interference of the incoming samples. This can be achieved by combining sign-based algorithms with the adaptive filtering section. Finally, several AACs are developed using variants of the non-negative algorithms, and performance measures are computed and compared. All of the proposed AACs are tested on actual ICG signals. Among the AACs evaluated, the sign regressor normalized non-negative LMS (SRN(3)LMS) based adaptive artifact canceller achieves the highest signal-to-noise ratio (SNR). The SNR achieved by this algorithm is 8.5312 dB in baseline wander artifact elimination, 7.5908 dB in electrode muscle artifact elimination, and 8.4231 dB in impedance measurement artifact elimination. PMID:27386256
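
    As a hedged sketch of the ingredients named above (sign-regressor input, power normalization, and a non-negativity constraint on the weights), one update step could look like the following; the exact SRN(3)LMS recursion of the paper may differ in detail.

```python
import numpy as np

def sign_regressor_nn_nlms_step(w, u, d, mu=0.01, eps=1e-8):
    """One illustrative adaptive-filter update: sign of the regressor
    (cheap hardware-friendly arithmetic), normalization by the input power,
    and projection of the weights onto the non-negative orthant."""
    e = d - w @ u                                   # a priori error
    w = w + mu * e * np.sign(u) / (u @ u + eps)     # sign-regressor, normalized
    return np.maximum(w, 0.0), e                    # enforce non-negative weights
```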

  4. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    Satisfiability (SAT) problem is an NP-complete problem. Based on the analysis about it, SAT problem is translated equally into an optimization problem on the minimum of objective function. A hybrid differential evolution algorithm is proposed to solve the Satisfiability problem. It makes full use of strong local search capacity of hill-climbing algorithm and strong global search capability of differential evolution algorithm, which makes up their disadvantages, improves the efficiency of algorithm and avoids the stagnation phenomenon. The experiment results show that the hybrid algorithm is efficient in solving SAT problem.

  5. AMGET, an R-Based Postprocessing Tool for ADAPT 5

    PubMed Central

    Guiastrennec, B; Wollenberg, L; Forrest, A; Ait-Oudhia, S

    2013-01-01

    ADAPT 5 is a powerful modeling software for population pharmacokinetic and pharmacodynamic systems analysis, but provides limited built-in functionality for creating pre- and post-analysis diagnostic plots. ADAPT 5 Model Evaluation Graphical Toolkit (AMGET), an external package written in the open source R programming language, was developed specifically to support efficient postprocessing of ADAPT 5 runs, as well as NONMEM and S-ADAPT runs. Using interactive navigational menus, users of AMGET are able to rapidly create informative diagnostic plots enriched by the display of numerical and graphical elements with a high degree of customization using a simple settings spreadsheet. This article describes each feature of the AMGET package and illustrates how it allows users to utilize the powerful numerical routines of the ADAPT 5 package in a more efficient manner through the use of a simulated dataset and a simple pharmacokinetic model optimized using the maximum likelihood expectation maximization (MLEM) algorithm of ADAPT 5. PMID:23903464

  6. Adaptive sparse polynomial chaos expansion based on least angle regression

    NASA Astrophysics Data System (ADS)

    Blatman, Géraud; Sudret, Bruno

    2011-03-01

    Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, the paper describes a non-intrusive method that builds a sparse PC expansion. First, an original strategy for truncating the PC expansions, based on hyperbolic index sets, is proposed. Then an adaptive algorithm based on least angle regression (LAR) is devised for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to avoid the overfitting phenomenon. The accuracy of the PC metamodel is checked using an estimate inspired by statistical learning theory, namely the corrected leave-one-out error. As a consequence, a rather small number of PC terms are eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an analytical function. Then the method is illustrated on three stochastic finite element problems. The first model features 10 input random variables, whereas the two others involve an input random field, which is discretized into 38 and 30 - 500 random variables, respectively.
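
    A toy one-dimensional version of the LAR-based selection step might look as follows; the hyperbolic truncation, the adaptive basis enrichment and the corrected leave-one-out error of the full method are omitted, and the helper name sparse_pce_1d is purely illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars

def sparse_pce_1d(xi, y, max_degree=10, n_terms=5):
    """Build probabilists' Hermite polynomials of a standard normal input xi
    and let least angle regression keep only the few most significant
    polynomial chaos coefficients."""
    # design matrix: column j holds He_j(xi)
    Psi = np.column_stack([
        hermeval(xi, np.eye(max_degree + 1)[j]) for j in range(max_degree + 1)
    ])
    model = Lars(n_nonzero_coefs=n_terms).fit(Psi, y)
    return model.coef_, model.intercept_
```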

  7. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    SciTech Connect

    Veiga, Catarina; Royle, Gary; Lourenço, Ana Mónica; Mouinuddin, Syed; Herk, Marcel van; Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R.

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent to using different algorithms for dose warping. Methods: The authors describe a DIR based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of

  8. Tone reproduction for high-dynamic range imaging based on adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ha, Changwoo; Lee, Joohyun; Jeong, Jechang

    2014-03-01

    A tone reproduction algorithm with enhanced contrast of high-dynamic range images on conventional low-dynamic range display devices is presented. The proposed algorithm consists mainly of block-based parameter estimation, a characteristic-based luminance adjustment, and an adaptive Gaussian filter using minimum description length. Instead of relying only on the reduction of the dynamic range, a characteristic-based luminance adjustment process modifies the luminance values. The Gaussian-filtered luminance value is obtained from appropriate value of variance, and the contrast is then enhanced through the use of a relation between the adjusted luminance and Gaussian-filtered luminance values. In the final tone-reproduction process, the proposed algorithm combines color and luminance components in order to preserve the color consistency. The experimental results demonstrate that the proposed algorithm achieves a good subjective quality while enhancing the contrast of the image details.

  9. Learning algorithms for feedforward networks based on finite samples

    SciTech Connect

    Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.

    1994-09-01

    Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.

  10. Non-equilibrium molecular dynamics simulation of nanojet injection with adaptive-spatial decomposition parallel algorithm.

    PubMed

    Shin, Hyun-Ho; Yoon, Woong-Sup

    2008-07-01

    An Adaptive-Spatial Decomposition parallel algorithm was developed to increase computation efficiency for molecular dynamics simulations of nano-fluids. Injection of a liquid argon jet with a scale of 17.6 molecular diameters was investigated. A solid annular platinum injector was also solved simultaneously with the liquid injectant by adopting a solid modeling technique which incorporates phantom atoms. The viscous heat was naturally discharged through the solids so the liquid boiling problem was avoided with no separate use of temperature controlling methods. Parametric investigations of injection speed, wall temperature, and injector length were made. A sudden pressure drop at the orifice exit causes flash boiling of the liquid departing the nozzle exit with strong evaporation on the surface of the liquids, while rendering a slender jet. The elevation of the injection speed and the wall temperature causes an activation of the surface evaporation concurrent with reduction in the jet breakup length and the drop size.

  11. Quantitative analysis of terahertz spectra for illicit drugs using adaptive-range micro-genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin

    2011-08-01

    In the field of anti-illicit drug applications, many suspicious mixture samples might consist of various drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) has been proposed. Five mixture cases are discussed using ARVIPɛμGA driven quantitative terahertz spectroscopic analysis in this paper. The devised simulation results show agreement with the previous experimental results, which suggested that the proposed technique has potential applications for terahertz spectral identifications of drug mixture components. The results show agreement with the results obtained using other experimental and numerical techniques.

  12. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms are playing an important role in the research field of image segmentation. In this paper, we briefly introduce the theories of four existing image segmentation algorithms based on swarm intelligence including fish swarm algorithm, artificial bee colony, bacteria foraging algorithm and particle swarm optimization. Then some image benchmarks are tested in order to show the differences of the segmentation accuracy, time consumption, convergence and robustness for Salt & Pepper noise and Gaussian noise of these four algorithms. Through these comparisons, this paper gives qualitative analyses for the performance variance of the four algorithms. The conclusions in this paper would give a significant guide for the actual image segmentation.

  13. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    PubMed

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-01

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass

  14. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    PubMed

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-01

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass

  15. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year for a sample rate of 1 per day with each detection, followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Flight Center and tested against current data.

  16. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  17. A Genetic Algorithm Based Augmented Lagrangian Method for Computationally Fast Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Srivastava, Soumil; Deb, Kalyanmoy

    Among the penalty-based approaches for constrained optimization, Augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally to allow a better search behavior, and (iii) they can find the optimal Lagrange multiplier for each constraint as a by-product of optimization. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters adaptively so that the corresponding penalized function dynamically changes its optimum from the unconstrained minimum point to the constrained minimum point with iterations. However, the flip side of these algorithms is that the overall algorithm is a serial implementation of a number of optimization tasks, a process that is usually time-consuming. In this paper, we devise a genetic algorithm based parameter update strategy for a particular AL method. The strategy is self-adaptive in order to make the overall genetic algorithm based augmented Lagrangian (GAAL) method parameter-free. The GAAL method is applied to a number of constrained test problems taken from the EA literature. The number of function evaluations required by GAAL is, for many problems, an order of magnitude or more lower than that of existing methods.
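
    For reference, the classical augmented Lagrangian machinery that such a method builds on is shown below for equality constraints h_j(x) = 0; the GA-driven, self-adaptive parameter update of GAAL itself is not reproduced here.

```python
def augmented_lagrangian(f, h_list, lam, rho):
    """Textbook augmented Lagrangian for equality constraints h_j(x) = 0:
    L(x) = f(x) + sum_j lam_j * h_j(x) + (rho/2) * sum_j h_j(x)**2."""
    def L(x):
        return (f(x)
                + sum(l * h(x) for l, h in zip(lam, h_list))
                + 0.5 * rho * sum(h(x) ** 2 for h in h_list))
    return L

def update_multipliers(lam, h_list, x_best, rho):
    """Classical first-order multiplier update applied after each inner
    (here GA-based) minimization of L."""
    return [l + rho * h(x_best) for l, h in zip(lam, h_list)]
```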

  18. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing and crossing-free layouts for planar graphs. The minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. A disadvantage of the Kamada-Kawai embedder algorithm is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until the local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
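
    The Barzilai-Borwein step itself is simple to state; a generic gradient-only sketch (not tied to the graph-drawing energy above) is:

```python
import numpy as np

def bb_minimize(grad, x0, n_iter=200, alpha0=1e-3, eps=1e-12):
    """Gradient descent with the Barzilai-Borwein step length
    alpha_k = (s_k . s_k) / (s_k . y_k), where s_k = x_k - x_{k-1} and
    y_k = g_k - g_{k-1}. Only gradients are required, no Hessian."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev            # plain gradient step to start
    for _ in range(n_iter):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > eps else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```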

  19. Fast diffraction computation algorithms based on FFT

    NASA Astrophysics Data System (ADS)

    Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan

    2010-11-01

    The discovery of the Fast Fourier Transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formulae so as to identify and isolate the Fourier transforms they contain. In this way, the fast scaled transformation, the fast Fresnel transformation and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allowed freedom to choose the dimensions of the output window for Fraunhofer-Fourier and Fresnel diffraction; the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT; and, last but not least, the linearized discrete scaled convolution, a new concept for which we claim priority.
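
    At the heart of these fast methods is the fact that the far-field (Fraunhofer) pattern is, up to a scale factor, the Fourier transform of the aperture; a minimal FFT-based sketch with constant prefactors omitted is:

```python
import numpy as np

def fraunhofer_intensity(aperture, wavelength, distance, dx):
    """Relative far-field (Fraunhofer) intensity of a sampled square aperture
    grid computed via the FFT, plus the observation-plane coordinates along
    one axis (x = wavelength * z * fx)."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    fx = np.fft.fftshift(np.fft.fftfreq(aperture.shape[0], d=dx))
    x_obs = wavelength * distance * fx
    return x_obs, np.abs(field) ** 2
```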

  20. Raytracing Based upon the Symplectic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Li, C.

    2014-12-01

    Raytracing is a basic problem in seismic imaging, used broadly in seismology; the reliability of the imaging depends on the accuracy of both the spatial trajectory and the traveltime of the ray. A seismic ray travelling through inhomogeneous media follows the eikonal equation, a first-order differential equation for the traveltime that satisfies a Hamiltonian system. In the Cartesian coordinate system, we use a separable Hamiltonian function. In this paper, the symplectic algorithm method with a bi-cubic convolution algorithm was used to solve the Hamiltonian system for the raytracing problem. Compared with the Fast Marching Method (FMM), the results show that the symplectic algorithm method (SAM) can keep the stability of the solution of the eikonal equation. Due to the use of the symplectic algorithm, the method can produce a reliable seismic wavefront with an accurate ray trajectory (Fig.1). Meanwhile, the numerical modeling shows that the use of SAM can not only keep the stability of the Hamiltonian system with fast computation but also improve the accuracy of the seismic ray tracing (Fig.2).
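
    For a separable Hamiltonian of the kind mentioned above, the simplest symplectic integrator alternates a momentum "kick" and a position "drift"; the sketch below is a generic semi-implicit (symplectic) Euler step, not the paper's specific implementation.

```python
import numpy as np

def symplectic_euler(dT_dp, dV_dx, x0, p0, dt, n_steps):
    """Symplectic (semi-implicit) Euler integration of a separable
    Hamiltonian H(x, p) = T(p) + V(x); returns the (x, p) trajectory."""
    x = np.asarray(x0, dtype=float).copy()
    p = np.asarray(p0, dtype=float).copy()
    traj = [(x.copy(), p.copy())]
    for _ in range(n_steps):
        p -= dt * dV_dx(x)     # kick: update momentum from the position gradient
        x += dt * dT_dp(p)     # drift: update position from the momentum gradient
        traj.append((x.copy(), p.copy()))
    return traj
```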

  1. Function-Based Algorithms for Biological Sequences

    ERIC Educational Resources Information Center

    Mohanty, Pragyan Sheela P.

    2015-01-01

    Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern matching algorithms in DNA sequences are presented. For gene order data, an efficient data structure is presented capable of storing all gene re-orderings in a systematic manner. A common characteristic of presented…

  2. Analyzing Hedges in Verbal Communication: An Adaptation-Based Approach

    ERIC Educational Resources Information Center

    Wang, Yuling

    2010-01-01

    Based on Adaptation Theory, the article analyzes the production process of hedges. The procedure consists of the continuous making of choices in linguistic forms and communicative strategies. These choices are made just for adaptation to the contextual correlates. Besides, the adaptation process is dynamic, intentional and bidirectional.

  3. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    SciTech Connect

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated

  4. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions. PMID:25056743
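
    At the core of any simulated annealing variant, including the memory structure adapted one above, sits the Metropolis acceptance rule; a minimal sketch (the memory of elite solutions is not shown) is:

```python
import math
import random

def accept(current_cost, candidate_cost, temperature):
    """Metropolis rule: always accept improvements, accept worse candidates
    with probability exp(-(increase)/T), assuming temperature > 0."""
    if candidate_cost <= current_cost:
        return True
    return random.random() < math.exp((current_cost - candidate_cost) / temperature)
```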

  5. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models

    PubMed Central

    Wise, S.M.; Lowengrub, J.S.; Cristini, V.

    2010-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  6. An Adaptive Multigrid Algorithm for Simulating Solid Tumor Growth Using Mixture Models.

    PubMed

    Wise, S M; Lowengrub, J S; Cristini, V

    2011-01-01

    In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies. PMID:21076663

  7. Adaptive Algorithm for Soil Moisture Retrieval in Agricultural and Mountainous Areas with High Resolution ASAR Images

    NASA Astrophysics Data System (ADS)

    Notarnicola, C.; Paloscia, S.; Pettinato, S.; Preziosa, G.; Santi, E.; Ventura, B.

    2010-12-01

    In this paper, extensive data sets of SAR images and related ground truth over three areas characterized by very different surface features have been analyzed in order to understand the ENVISAT/ASAR responses to different soil, environmental and seasonal conditions. The comparison of the backscattering coefficients as a function of soil moisture for all the analyzed datasets indicates the same sensitivity to soil moisture variations but with different biases, which may depend on soil characteristics, vegetation presence and roughness effects. A further comparison with historical data collected on bare soils with comparable roughness at the same frequency, polarization and incidence angle confirmed that the different surface features affect the bias of the relationship, while the backscattering sensitivity to the soil moisture content (SMC) remains quite constant. These different bias values have been used to determine an adaptive term to be added to the electromagnetic formulation of the backscattering responses from natural surfaces obtained with the Integral Equation Model (IEM). The simulated data from this model have then been used to train a neural network as an inversion algorithm. The paper presents the results of this new technique in comparison with neural network and Bayesian algorithms trained on one area and then tested on the others.

  8. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
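
    As a hedged illustration of the kind of procedure described above, the sketch below runs a plain simulated annealing loop augmented with a small memory of recently visited tours. The swap neighbourhood, cooling schedule, and load-weighted fuel proxy in route_cost are illustrative assumptions; time windows, capacities, and the paper's emission model are omitted.

    ```python
    import math
    import random

    def route_cost(route, dist, load_per_stop=1.0):
        """Illustrative fuel/CO2 proxy: distance weighted by the load still on board."""
        cost, load = 0.0, load_per_stop * len(route)
        prev = 0                                      # depot index
        for stop in route:
            cost += dist[prev][stop] * (1.0 + 0.1 * load)
            load -= load_per_stop
            prev = stop
        return cost + dist[prev][0]                   # return leg to the depot

    def msa_sa(dist, n_customers, t0=100.0, alpha=0.995, iters=2000, mem_size=50):
        current = list(range(1, n_customers + 1))
        random.shuffle(current)
        best = current[:]
        memory = {tuple(current)}                     # memory structure: recently seen tours
        t = t0
        for _ in range(iters):
            t *= alpha                                # geometric cooling
            cand = current[:]
            i, j = random.sample(range(n_customers), 2)
            cand[i], cand[j] = cand[j], cand[i]       # simple swap neighbourhood
            if tuple(cand) in memory:                 # skip tours the memory already holds
                continue
            memory.add(tuple(cand))
            if len(memory) > mem_size:
                memory.pop()                          # bounded memory (drops an arbitrary entry)
            delta = route_cost(cand, dist) - route_cost(current, dist)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
                if route_cost(cand, dist) < route_cost(best, dist):
                    best = cand[:]
        return best
    ```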

  9. Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy

    SciTech Connect

    Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean

    2013-11-15

    Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, for both arbitrary templates and template-free implants (such as robotic templates). Methods: Eight clinical cases were chosen randomly from a bank of patients previously treated in our clinic to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated from the algorithm for different numbers of catheters. The best plan is chosen based on different dosimetry criteria and automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested in breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with a better or equivalent dose distribution could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable since no statistical difference was found when compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
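
    The core geometric step can be realized with a Lloyd-type iteration, sketched below: sample points inside the target contour are repeatedly assigned to their nearest generator, and each generator moves to the centroid of its region. Grid sampling, the IPSA dose optimization, and all clinical constraints are outside this sketch; function and argument names are assumptions.

    ```python
    import numpy as np

    def cvt_catheters(region_points, n_catheters, iters=50, seed=0):
        """Lloyd-style CVT: distribute catheter positions uniformly over a target region.

        region_points: (N, 2) array of points sampled inside the target contour.
        Returns an (n_catheters, 2) array of generator positions.
        """
        rng = np.random.default_rng(seed)
        pts = np.asarray(region_points, dtype=float)
        gen = pts[rng.choice(len(pts), n_catheters, replace=False)]   # initial generators
        for _ in range(iters):
            # assign every sample point to its nearest generator (its Voronoi region)
            d = np.linalg.norm(pts[:, None, :] - gen[None, :, :], axis=2)
            owner = d.argmin(axis=1)
            # move each generator to the centroid of its region
            for k in range(n_catheters):
                cell = pts[owner == k]
                if len(cell):
                    gen[k] = cell.mean(axis=0)
        return gen
    ```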

  10. Algorithms for adaptive stochastic control for a class of linear systems

    NASA Technical Reports Server (NTRS)

    Toda, M.; Patel, R. V.

    1977-01-01

    Control of linear, discrete time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single input, third order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.

  11. Adaptive mechanism-based congestion control for networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhang, Yun; Chen, C. L. Philip

    2013-03-01

    In order to assure communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for the congestion avoidance and resource management of network systems. Different from traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically but are dynamically adjusted by an adaptive mechanism. By integrating the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), gives better control over queue management, and achieves more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is carefully designed. Simulation studies show that the proposed ATRED algorithm achieves good performance in varying network environments, is superior to the RED and Gentle-RED algorithms, and provides more reliable service under varying network conditions.
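
    A toy sketch of the general idea, adaptive thresholds layered on a RED-style drop probability, is given below. The EWMA weight, the target queue length, and the proportional threshold adjustment are illustrative assumptions, not the paper's control law or its stability analysis.

    ```python
    class AdaptiveRED:
        """Toy RED variant whose thresholds track the observed average queue length."""

        def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, w=0.002):
            self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
            self.avg = 0.0

        def on_packet(self, queue_len, target=10.0, gain=0.01):
            # exponentially weighted moving average of the instantaneous queue length
            self.avg = (1.0 - self.w) * self.avg + self.w * queue_len
            # adaptive mechanism (illustrative): nudge both thresholds toward the target queue
            error = self.avg - target
            self.min_th = max(1.0, self.min_th + gain * error)
            self.max_th = max(self.min_th + 1.0, self.max_th + gain * error)
            # standard RED drop probability computed from the adapted thresholds
            if self.avg < self.min_th:
                return 0.0
            if self.avg >= self.max_th:
                return 1.0
            return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
    ```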

  12. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2008-01-01

    This report presents a pilot study integrating the particle swarm algorithm, social knowledge adaptation, and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation to a dynamic environment and to provide insight into social group knowledge discovery and strategic searching. A new adaptive environment model, which dynamically reacts to the groups' collective searching behavior, is proposed in this research. The simulations indicate that effective communication between groups is not a necessary requirement for self-organized groups as a whole to achieve efficient collective searching behavior in the adaptive environment.

  13. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2007-01-01

    This report presents a pilot study integrating the particle swarm algorithm, social knowledge adaptation, and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation to a dynamic environment and to provide insight into social group knowledge discovery and strategic searching. A new adaptive environment model, which dynamically reacts to the groups' collective searching behavior, is proposed in this research. The simulations indicate that effective communication between groups is not a necessary requirement for self-organized groups as a whole to achieve efficient collective searching behavior in the adaptive environment.

  14. Parallelization of exoplanets detection algorithms based on field rotation; example of the MOODS algorithm for SPHERE

    NASA Astrophysics Data System (ADS)

    Mattei, D.; Smith, I.; Ferrari, A.; Carbillet, M.

    2010-10-01

    Post-processing for exoplanet detection using direct imaging requires large data cubes and/or sophisticated signal processing techniques. For alt-azimuthal mounts, a projection effect called field rotation makes a potential planet rotate in a known manner across the set of images. For ground-based telescopes that use extreme adaptive optics and advanced coronagraphy, techniques based on field rotation are already broadly used and still under development. In most such techniques, for a given initial position of the planet, the planet intensity estimate is a linear function of the set of images. However, due to field rotation, the modified instrumental response applied is not shift invariant like usual linear filters. Testing all possible initial positions is therefore very time-consuming. To reduce the processing time, we propose to handle each subset of initial positions on a different machine using parallel programming. In particular, the MOODS algorithm dedicated to the VLT-SPHERE instrument, which jointly estimates the light contributions of the star and the potential exoplanet, is parallelized on the Observatoire de la Cote d'Azur cluster. Different parallelization methods (OpenMP, MPI, job arrays) have been developed for the initial MOODS code and compared to each other. The one finally chosen splits the initial positions over the available processors while best accounting for the constraints of the cluster structure: memory, job submission queues, number of available CPUs, and cluster average load. In the end, a standard set of images is satisfactorily processed in a few hours instead of a few days.
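
    The parallelization pattern itself is simple to sketch: the candidate initial positions are partitioned across workers, each of which runs the per-position estimator independently. In the sketch below, evaluate_position is a placeholder statistic rather than the MOODS estimator, and a process pool stands in for the cluster job submission described above.

    ```python
    from multiprocessing import Pool

    import numpy as np

    def evaluate_position(args):
        """Placeholder per-position score; the real pipeline would apply the rotation-aware
        linear estimator of the planet intensity for this initial position."""
        (x0, y0), cube = args
        return (x0, y0), float(cube[:, y0, x0].mean())    # illustrative statistic only

    def search_positions(cube, positions, workers=8):
        # split the candidate initial positions across worker processes
        jobs = [((x, y), cube) for (x, y) in positions]
        with Pool(workers) as pool:
            results = pool.map(evaluate_position, jobs)
        return max(results, key=lambda r: r[1])           # best-scoring initial position

    # usage sketch (run under `if __name__ == "__main__":` on platforms that spawn processes):
    # cube = np.random.rand(50, 128, 128)                 # (frames, height, width)
    # best = search_positions(cube, [(x, y) for x in range(10, 20) for y in range(10, 20)])
    ```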

  15. Broad-area search for targets in SAR imagery with context-adaptive algorithms

    NASA Astrophysics Data System (ADS)

    Patterson, Tim J.; Fairchild, Scott R.

    1996-06-01

    This paper describes an ATR system based on gray-scale morphology which has proven very effective in performing broad-area search for targets of interest. Gray-scale morphology is used to extract several distinctive sets of features that combine intensity and spatial information. Results of direct comparisons with other algorithms are presented; in a series of independently scored tests, the morphological approach has shown superior results. An automated training system based on a combination of genetic algorithms and classification and regression trees is described. Further performance gains are expected by allowing context-sensitive selection of parameter sets for the morphological processing. Context is acquired from the image using texture measures to identify the local clutter environment. The system is designed to build new classifiers on the fly to match specific image-to-image variations.
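
    As a rough illustration of morphology-based feature extraction (not the fielded ATR feature set), the sketch below computes white top-hat responses at several structuring-element sizes with SciPy and summarizes them into a small feature vector; the sizes and summary statistics are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def morphological_features(sar_chip, sizes=(3, 5, 9)):
        """Top-hat responses at several structuring-element sizes, summarized into a
        small feature vector (sizes and statistics are illustrative assumptions)."""
        img = np.asarray(sar_chip, dtype=float)
        feats = []
        for s in sizes:
            tophat = ndimage.white_tophat(img, size=(s, s))   # bright, compact structures
            feats.extend([tophat.mean(), tophat.max(), float((tophat > tophat.mean()).mean())])
        return np.array(feats)
    ```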

  16. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
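
    The two-step feature extraction can be sketched as follows: the convolutional layers are run once over the whole frame, and the fully-connected layers are then applied per candidate window by cropping the shared feature map. The PyTorch layer sizes, the pooling to a fixed crop size, and the assumption that feature-map coordinates align with image coordinates are illustrative, not the network used in the paper.

    ```python
    import torch
    import torch.nn as nn

    # Illustrative layers only: the conv trunk runs once per frame, the FC head per window.
    conv = nn.Sequential(nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
    pool = nn.AdaptiveAvgPool2d((8, 8))              # bring every crop to a fixed spatial size
    fc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU())

    def window_features(frame, windows):
        """frame: 1x3xHxW tensor; windows: list of (x, y, w, h) boxes in image coordinates."""
        with torch.no_grad():
            fmap = conv(frame)                       # step 1: one pass over the whole frame
            feats = []
            for (x, y, w, h) in windows:             # step 2: FC layers per cropped region
                crop = fmap[:, :, y:y + h, x:x + w]  # assumes stride-1 convs, so 1:1 coordinates
                feats.append(fc(pool(crop)))
            return torch.cat(feats, dim=0)           # (num_windows, 128) feature matrix
    ```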

  17. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  18. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  19. A Vehicle Detection Algorithm Based on Deep Belief Network

    PubMed Central

    Cai, Yingfeng; Chen, Long

    2014-01-01

    Vision-based vehicle detection is a critical technology that plays an important role not only in vehicle active safety but also in road video surveillance applications. Traditional shallow-model-based vehicle detection algorithms still cannot meet the requirement of accurate vehicle detection in these applications. In this work, a novel deep-learning-based vehicle detection algorithm with a 2D deep belief network (2D-DBN) is proposed. The proposed 2D-DBN architecture uses second-order planes instead of first-order vectors as input and uses bilinear projection to retain discriminative information and determine the size of the deep architecture, which enhances the success rate of vehicle detection. On-road experimental results demonstrate that the algorithm performs better than state-of-the-art vehicle detection algorithms on the testing data sets. PMID:24959617

  20. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
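
    A compact sketch of those three steps is given below: the k nearest training samples are gathered, class-wise chunklets are formed, and a transformation is obtained from a generalized eigenproblem on the between- and within-chunklet scatter matrices. The regularization, dimensionality, and function names are assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def local_metric(train_X, train_y, test_x, k=20, dim=10, reg=1e-3):
        """LDA-style local transform learned from the k nearest training samples of one
        test sample, with class-wise chunklets (names and regularization are assumptions)."""
        d = np.linalg.norm(train_X - test_x, axis=1)
        idx = np.argsort(d)[:k]                       # k nearest neighbours of the test sample
        X, y = train_X[idx], train_y[idx]
        mu = X.mean(axis=0)
        Sw = reg * np.eye(X.shape[1])                 # within-chunklet scatter (regularized)
        Sb = np.zeros_like(Sw)                        # between-chunklet scatter
        for c in np.unique(y):                        # each class among the neighbours = chunklet
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        vals, vecs = eigh(Sb, Sw)                     # generalized eigenproblem
        W = vecs[:, np.argsort(vals)[::-1][:dim]]     # top directions form the transform
        return W                                      # classify with nearest neighbour on X @ W
    ```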

  1. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images. The parameters for the backlight dimming level and pixel compensation adapt to the image classification. Simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed, and no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
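
    A minimal sketch of classification-driven global dimming is shown below: the frame is classified by mean luminance and contrast, a backlight level is chosen per class, and pixels are compensated by the inverse of that level. The thresholds and dimming levels are illustrative assumptions, not the paper's classifier or measured values.

    ```python
    import numpy as np

    def global_dimming(rgb, dim_levels=(0.6, 0.75, 0.9)):
        """Classify a frame by luminance/contrast and pick one global backlight level."""
        lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        mean_l = lum.mean() / 255.0
        contrast = (lum.max() - lum.min()) / (lum.max() + lum.min() + 1e-6)
        if mean_l < 0.3:                      # dark image: dim aggressively
            backlight = dim_levels[0]
        elif contrast > 0.8:                  # high-contrast image: moderate dimming
            backlight = dim_levels[1]
        else:                                 # bright, low-contrast image: dim gently
            backlight = dim_levels[2]
        # pixel compensation: boost pixel values to offset the reduced backlight
        compensated = np.clip(rgb.astype(float) / backlight, 0, 255).astype(np.uint8)
        return backlight, compensated
    ```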

  2. Comparison of Beam-Based Alignment Algorithms for the ILC

    SciTech Connect

    Smith, J.C.; Gibbons, L.; Patterson, J.R.; Rubin, D.L.; Sagan, D.; Tenenbaum, P.; /SLAC

    2006-03-15

    The main linac of the International Linear Collider (ILC) requires more sophisticated alignment techniques than those provided by survey alone. Various Beam-Based Alignment (BBA) algorithms have been proposed to achieve the desired low emittance preservation. Dispersion Free Steering, Ballistic Alignment and the Kubo method are compared. Alignment algorithms are also tested in the presence of an Earth-like stray field.

  3. Multichannel Speech Enhancement Based on Generalized Gamma Prior Distribution with Its Online Adaptive Estimation

    NASA Astrophysics Data System (ADS)

    Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada

    We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from actual noisy speech in a frame-by-frame manner. The use of a more general prior distribution with online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, multi-channel information in the form of cross-channel statistics is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioning and open windows.

  4. A danger-theory-based immune network optimization algorithm.

    PubMed

    Zhang, Ruirui; Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times.

  5. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, proper modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism based on the improved ABC algorithm using external speed information is presented. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
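
    The screening step relies on the modified Hausdorff distance, which is standard and easy to state; a small NumPy version is sketched below, where the candidate with the smallest distance to the measured track would be retained. Its use here as a stand-alone function is an assumption about the interface, not the paper's code.

    ```python
    import numpy as np

    def modified_hausdorff(A, B):
        """Modified Hausdorff distance between two point sets (Dubuisson and Jain, 1994)."""
        A, B = np.asarray(A, float), np.asarray(B, float)
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise distances
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())

    # screening sketch: keep the map candidate closest to the measured gravity track
    # best = min(candidates, key=lambda C: modified_hausdorff(track, C))
    ```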

  6. Embedded assessment algorithms within home-based cognitive computer game exercises for elders.

    PubMed

    Jimison, Holly; Pavel, Misha

    2006-01-01

    With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and describe cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.

  7. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.

  8. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the thread count and layer count are two remarkable factors for the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel speedup method. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study further demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model in this paper achieves a much higher speedup ratio and efficiency.
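
    The pipeline pattern can be sketched in a few lines: each stage pulls work from an input queue, transforms it, and pushes it downstream, with a sentinel closing the chain. The sketch below uses Python threads purely to illustrate the structure; because of CPython's GIL, a production slicer would use native threads or processes, and intersect and link_contours are assumed stage functions, not the paper's implementation.

    ```python
    import queue
    import threading

    def stage(fn, q_in, q_out):
        """Generic pipeline stage: pull work items, transform them, push them downstream."""
        while True:
            item = q_in.get()
            if item is None:                 # sentinel terminates the stage
                q_out.put(None)
                break
            q_out.put(fn(item))

    def slice_pipeline(triangles, z_levels, intersect, link_contours):
        """Pipeline sketch: feed z-levels -> intersect facets -> link segments into contours."""
        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        t1 = threading.Thread(target=stage, args=(lambda z: (z, intersect(triangles, z)), q1, q2))
        t2 = threading.Thread(target=stage, args=(lambda zs: (zs[0], link_contours(zs[1])), q2, q3))
        t1.start(); t2.start()
        for z in z_levels:                   # the main thread acts as the feeding stage
            q1.put(z)
        q1.put(None)
        layers = []
        while True:                          # collect finished layers until the sentinel arrives
            out = q3.get()
            if out is None:
                break
            layers.append(out)
        t1.join(); t2.join()
        return layers
    ```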

  9. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional image information into one dimension and then matches and identifies through one-dimensional correlation; because normalization is applied, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves the matching speed while preserving matching accuracy.
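
    A minimal sketch of the idea follows: the template's row and column sums are correlated against the corresponding projections of each candidate window, and mean removal plus norm division makes the score invariant to proportional brightness changes. The exhaustive window loop is kept for clarity; a faster variant would precompute the image projections, and all names here are illustrative.

    ```python
    import numpy as np

    def normalized(v):
        """Zero-mean, unit-norm 1-D profile (makes the score brightness-invariant)."""
        v = v - v.mean()
        n = np.linalg.norm(v)
        return v / n if n else v

    def projection_match(image, template):
        """Match via 1-D projections: correlate row/column sums instead of 2-D patches."""
        th, tw = template.shape
        t_row = normalized(template.sum(axis=1))       # vertical projection of the template
        t_col = normalized(template.sum(axis=0))       # horizontal projection of the template
        best, best_score = (0, 0), -np.inf
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                win = image[y:y + th, x:x + tw]
                score = (normalized(win.sum(axis=1)) @ t_row +
                         normalized(win.sum(axis=0)) @ t_col)
                if score > best_score:
                    best, best_score = (y, x), score
        return best, best_score
    ```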

  10. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, and algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.

  11. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal structure. As a result, we propose an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  12. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal structure. As a result, we propose an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  13. Adaptive muffler based on controlled flow valves.

    PubMed

    Šteblaj, Peter; Čudina, Mirko; Lipar, Primož; Prezelj, Jurij

    2015-06-01

    An adaptive muffler with a flexible internal structure is considered. Flexibility is achieved using controlled flow valves. The proposed adaptive muffler is able to adapt to changes in engine operating conditions. It consists of a Helmholtz resonator, expansion chamber, and quarter wavelength resonator. Different combinations of the control valves' states at different operating conditions define the main working principle. To control the valve's position, an active noise control approach was used. With the proposed muffler, the transmission loss can be increased by more than 10 dB in the selected frequency range. PMID:26093462

  14. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
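
    A generic single-scan, run-based labeling routine in the same spirit is sketched below: each image row is scanned once, runs of foreground pixels are recorded, and runs overlapping the previous row are merged with union-find. This is a plain re-implementation of the general technique, not the paper's linked-list layout or FPGA design.

    ```python
    import numpy as np

    def one_scan_blobs(binary):
        """Single-pass, run-based labeling of a binary image; returns runs grouped by blob."""
        parent = {}

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]             # path halving
                a = parent[a]
            return a

        next_label, prev_runs, blobs = 0, [], {}
        for y, row in enumerate(np.asarray(binary).astype(bool)):
            runs, x = [], 0
            while x < len(row):
                if not row[x]:
                    x += 1
                    continue
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                x1 = x - 1                                # run covers columns [x0, x1] on this row
                label = next_label
                parent[label] = label
                next_label += 1
                for (px0, px1, plab) in prev_runs:        # 8-connected overlap with previous row
                    if px0 <= x1 + 1 and px1 >= x0 - 1:
                        parent[find(label)] = find(plab)  # union
                runs.append((x0, x1, label))
            for (x0, x1, lab) in runs:
                blobs.setdefault(find(lab), []).append((y, x0, x1))
            prev_runs = runs
        merged = {}                                       # regroup runs whose roots changed later
        for lab, run_list in blobs.items():
            merged.setdefault(find(lab), []).extend(run_list)
        return merged                                     # {blob_id: [(row, x_start, x_end), ...]}
    ```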

  15. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382

  16. Self-calibration algorithm in an asynchronous P300-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Schettini, F.; Aloise, F.; Aricò, P.; Salinari, S.; Mattia, D.; Cincotti, F.

    2014-06-01

    Objective. Reliability is a desirable characteristic of brain-computer interface (BCI) systems when they are intended to be used under non-experimental operating conditions. In addition, their overall usability is influenced by the complex and frequent procedures that are required for configuration and calibration. Earlier studies examined the issue of asynchronous control in P300-based BCIs, introducing dynamic stopping and automatic control suspension features. This report proposes and evaluates an algorithm for the automatic recalibration of the classifier's parameters using unsupervised data. Approach. Ten healthy subjects participated in five P300-based BCI sessions throughout a single day. First, we examined whether continuous adaptation of control parameters improved the accuracy of the asynchronous system over time. Then, we assessed the performance of the self-calibration algorithm with respect to the no-recalibration and supervised calibration conditions with regard to system accuracy and communication efficiency. Main results. Offline tests demonstrated that continuous adaptation of the control parameters significantly increased the communication efficiency of asynchronous P300-based BCIs. The self-calibration algorithm correctly assigned labels to unsupervised data with 95% accuracy, effecting communication efficiency that was comparable with that of supervised repeated calibration. Significance. Although additional online tests that involve end-users under non-experimental conditions are needed, these preliminary results are encouraging, from which we conclude that the self-calibration algorithm is a promising solution to improve P300-based BCI usability and reliability.

  17. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  18. Development of an Adaptive Multi-Method Algorithm for Automatic Picking of First Arrival Times: Application to Near Surface Seismic Data

    NASA Astrophysics Data System (ADS)

    Khalaf, A.; Camerlynck, C. M.; Schneider, A. C.; Florsch, N.

    2015-12-01

    Accurate picking of first arrival times plays an important role in many seismic studies, particularly in seismic tomography and in the monitoring of reservoirs or aquifers. Many techniques have been developed for picking first arrivals automatically or semi-automatically, but most of them were developed for seismological purposes and do not attain the required accuracy, owing to the complexity of near-surface structures and to the usually low signal-to-noise ratio. We propose a new adaptive algorithm for near-surface data that combines three picking methods: multi-nested windows (MNW), higher-order statistics (HOS), and the Akaike Information Criterion (AIC). It exploits the benefits of integrating several properties that reveal the presence of first arrivals to provide efficient and robust first-arrival picking. This strategy mimics human first-break picking, where the global trend is defined first and the exact first breaks are then searched for in the vicinity of that trend. In a multistage algorithm, three successive phases are launched, each of which characterizes a specific signal property. Within each phase, the potential picks and their error ranges are automatically estimated and then used sequentially to guide the picking in the following phase. The accuracy and robustness of the implemented algorithm are successfully validated on synthetic and real data that pose particular challenges for automatic pickers. The comparison of the resulting P-wave arrival times with those picked manually, and with other automatic picking algorithms, demonstrates the reliable performance of the new scheme under different noise conditions. All parameters of our multi-method algorithm are auto-adaptive thanks to the serial integration of each sub-algorithm's results in the workflow. Hence, it is nearly a parameter-free algorithm that is straightforward to implement and demands low computational resources.
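
    Two of the ingredients named above, the AIC criterion and a higher-order-statistics (kurtosis) attribute, are standard and easy to sketch; the versions below operate on a single trace window and would feed the multistage combination described in the abstract. Window lengths and the small stabilizing constants are assumptions.

    ```python
    import numpy as np

    def aic_pick(trace, eps=1e-10):
        """AIC picker for one window: the first-arrival index minimizes AIC(k)."""
        x = np.asarray(trace, dtype=float)
        n = len(x)
        aic = np.full(n, np.inf)
        for k in range(2, n - 2):
            aic[k] = (k * np.log(np.var(x[:k]) + eps) +
                      (n - k - 1) * np.log(np.var(x[k:]) + eps))
        return int(np.argmin(aic))

    def kurtosis_onset(trace, win=50):
        """HOS attribute: sliding kurtosis rises sharply at the signal onset."""
        x = np.asarray(trace, dtype=float)
        k = np.zeros(len(x))
        for i in range(win, len(x)):
            w = x[i - win:i]
            s = w.std() + 1e-12
            k[i] = np.mean(((w - w.mean()) / s) ** 4) - 3.0
        return int(np.argmax(np.diff(k))) if len(x) > win + 1 else 0
    ```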

  19. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani

    2015-05-01

    For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) influence the performance of the wireless system. In this paper, a rake receiver is presented for signals spread with the direct sequence spread spectrum (DS-SS) technique. The adaptive rake-receiver structure adjusts the receiver tap weights using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection algorithms (APA) to support weak signals through noise cancellation and to mitigate the interferences. To address the convergence speed and reduce the computational complexity of these algorithms, the well-known approach of partial-update (PU) adaptive filtering was employed, with variants such as sequential, periodic, M-max, and selective partial updates (SPU) in the proposed system. Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) show that the partial-update algorithms achieve performance nearly comparable to the full-update adaptive filters. Furthermore, the SPU variant performs close to the full-update NLMS and APA, while the M-max variant performs close to the full-update LMS algorithm.
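
    The M-max partial-update idea is easy to illustrate: at each sample only the m filter coefficients associated with the largest regressor magnitudes are updated, which cuts the per-sample update cost of NLMS. The sketch below is a generic NumPy version with illustrative parameters, not the rake-receiver implementation evaluated in the paper.

    ```python
    import numpy as np

    def mmax_nlms(x, d, taps=16, m=4, mu=0.5, eps=1e-6):
        """M-max partial-update NLMS: only the m taps with the largest |u| are updated."""
        w = np.zeros(taps)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]                 # regressor (most recent sample first)
            y[n] = w @ u
            e[n] = d[n] - y[n]                      # error against the desired signal
            sel = np.argsort(np.abs(u))[-m:]        # indices of the m largest |u| entries
            w[sel] += mu * e[n] * u[sel] / (u @ u + eps)
        return w, y, e
    ```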

  20. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.

    PubMed

    Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong

    2014-12-01

    In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. When the iterative control law and iterative performance index function in each iteration cannot accurately be obtained, for the first time a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
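
    As a hedged, much-simplified analogue of generalized value iteration, the tabular sketch below starts from an arbitrary positive semi-definite initial value function and repeatedly applies a Bellman backup on a discretized state/control grid; the dynamics, cost, grids, and nearest-neighbor interpolation are illustrative and stand in for the neural-network implementation used in the paper.

    ```python
    import numpy as np

    xs = np.linspace(-1.0, 1.0, 101)                      # discretized state grid
    us = np.linspace(-0.5, 0.5, 21)                       # discretized control grid
    f = lambda x, u: float(np.clip(0.9 * x + 0.5 * u - 0.1 * x ** 3, -1.0, 1.0))  # toy plant
    cost = lambda x, u: x ** 2 + u ** 2                   # stage cost (utility)

    def generalized_value_iteration(c0=5.0, sweeps=60):
        V = c0 * xs ** 2                                  # arbitrary positive semi-definite start
        for _ in range(sweeps):
            V_new = np.empty_like(V)
            for i, x in enumerate(xs):
                q = [cost(x, u) + V[int(np.abs(xs - f(x, u)).argmin())] for u in us]
                V_new[i] = min(q)                         # Bellman backup (greedy over controls)
            V = V_new
        return V
    ```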