Science.gov

Sample records for optimized particle filter

  1. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it has the capability to effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, typically an augmented state vector of target dynamics is used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is one way to achieve real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for the bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication-intensive parallel implementation of the particle filter without sacrificing tracking accuracy, using an efficient load balancing technique in which optimal particle migration is ensured. We also present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.
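
    As a minimal sketch of the load-balancing ingredient only (not the scheduler proposed in the paper), the snippet below apportions particles across heterogeneous worker processors in proportion to assumed per-processor throughputs, so that all workers finish their per-particle work at roughly the same time; the function name and speeds are hypothetical.

```python
# Minimal sketch: apportion N particles across heterogeneous worker
# processors in proportion to their assumed throughput, so that all
# workers finish their per-particle work at roughly the same time.
# (Illustrative only; not the scheduler proposed in the paper.)

def balance_particles(n_particles, speeds):
    """Return a per-worker particle count proportional to 'speeds'."""
    total = sum(speeds)
    counts = [int(n_particles * s / total) for s in speeds]
    # Hand out the remainder (due to rounding) to the fastest workers.
    remainder = n_particles - sum(counts)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:remainder]:
        counts[i] += 1
    return counts

# Example: 10,000 particles over four workers of unequal speed.
print(balance_particles(10_000, [1.0, 2.0, 2.0, 5.0]))  # -> [1000, 2000, 2000, 5000]
```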

  2. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
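
    The following is a minimal sketch, not taken from the paper, of sequential importance sampling with the optimal proposal for a scalar linear-Gaussian toy model, where the proposal p(x_{k+1} | x_k, y_{k+1}) is Gaussian and the incremental weight p(y_{k+1} | x_k) does not depend on the particle actually drawn; all model settings are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of sequential importance sampling with the "optimal"
# proposal p(x_{k+1} | x_k, y_{k+1}) for a scalar linear-Gaussian model
#   x_{k+1} = a x_k + N(0, Q),   y_{k+1} = h x_{k+1} + N(0, R).
# The incremental weight is then p(y_{k+1} | x_k), independent of the
# draw itself.  (Illustrative toy model, not the schemes in the paper.)

rng = np.random.default_rng(0)
a, h, Q, R = 0.95, 1.0, 0.1, 0.5
N = 500
x = rng.normal(0.0, 1.0, N)           # particles at time t_k
logw = np.zeros(N)                    # log importance weights

def step(x, logw, y_next):
    # Optimal proposal: Gaussian with precision 1/Q + h^2/R
    var = 1.0 / (1.0 / Q + h**2 / R)
    mean = var * (a * x / Q + h * y_next / R)
    x_new = rng.normal(mean, np.sqrt(var))
    # Incremental weight p(y_{k+1}|x_k) = N(y; h a x_k, h^2 Q + R)
    s = h**2 * Q + R
    logw = logw - 0.5 * (y_next - h * a * x)**2 / s
    logw -= logw.max()                # rescale for numerical stability
    return x_new, logw

x, logw = step(x, logw, y_next=0.3)
w = np.exp(logw); w /= w.sum()
print("posterior mean estimate:", np.sum(w * x))
```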

  3. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms, and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze the implementations of the proposed algorithm, and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
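
    A minimal sketch of resampling with an independent Metropolis-Hastings chain over particle indices (an illustration of the general IMH mechanism, not the specific architecture of the paper): with a uniform index proposal, the acceptance ratio reduces to a ratio of weights, so no cumulative sums or global normalization are required, which is what makes the step easy to pipeline.

```python
import numpy as np

# Minimal sketch of resampling via an Independent Metropolis-Hastings
# chain over particle indices: the target is the weighted empirical
# distribution (pi(i) proportional to w_i), the proposal is uniform
# over indices, so the acceptance ratio reduces to w_j / w_i.
# (Illustrative; the paper's pipelined architecture is not reproduced.)

def imh_resample(particles, weights, n_out, rng):
    i = rng.integers(len(particles))          # arbitrary starting index
    out = []
    for _ in range(n_out):
        j = rng.integers(len(particles))      # independent proposal
        if rng.random() < min(1.0, weights[j] / weights[i]):
            i = j                             # accept proposed index
        out.append(particles[i])
    return np.array(out)

rng = np.random.default_rng(1)
parts = rng.normal(size=1000)
w = np.exp(-0.5 * (parts - 1.0) ** 2)         # unnormalized weights
print(imh_resample(parts, w, n_out=1000, rng=rng).mean())
```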

  4. Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter

    NASA Astrophysics Data System (ADS)

    Ito, A.

    2014-12-01

    Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. Development of advanced methodologies for data-model fusion, or data assimilation, is therefore an important task to reduce the uncertainties and improve model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). The method is computationally intensive but applicable to nonlinear systems, which is an advantage in comparison with other techniques like the ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this approach would also be effective for assimilating satellite observational data. On the other hand, in the non-sequential case, the annual or long-term mean budget was adjusted to observations; this approach would also be effective for assimilating carbon stock data. Although there remain technical issues (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
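
    A minimal sketch of particle-filter parameter estimation under stated assumptions: each particle is a candidate parameter set, weighted by a Gaussian likelihood of observed fluxes given the model prediction. The 'toy_model' function below is a hypothetical stand-in; the process-based VISIT model itself is not reproduced.

```python
import numpy as np

# Minimal sketch of parameter estimation with a particle (SMC) filter:
# each particle is a candidate parameter set, weighted by a Gaussian
# likelihood of the observed fluxes given the model's prediction.
# 'toy_model' is a hypothetical stand-in for the process-based model.

rng = np.random.default_rng(2)

def toy_model(params, drivers):
    """Hypothetical gas-exchange model: flux = p0 * drivers + p1."""
    return params[0] * drivers + params[1]

def smc_update(particles, weights, drivers, observed_flux, obs_sigma=0.5):
    pred = np.array([toy_model(p, drivers) for p in particles])
    loglik = -0.5 * np.sum((pred - observed_flux) ** 2, axis=1) / obs_sigma**2
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # Resample (with a small jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(0, 0.01, particles.shape)
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.uniform([0.5, -1.0], [2.0, 1.0], size=(2000, 2))
weights = np.full(len(particles), 1.0 / len(particles))
drivers = np.linspace(0, 1, 48)                  # e.g. one day of 30-min steps
obs = 1.3 * drivers + 0.2 + rng.normal(0, 0.5, drivers.size)
particles, weights = smc_update(particles, weights, drivers, obs)
print("posterior mean parameters:", weights @ particles)
```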

  5. Multilevel Ensemble Transform Particle Filtering

    NASA Astrophysics Data System (ADS)

    Gregory, Alastair; Cotter, Colin; Reich, Sebastian

    2016-04-01

    This presentation extends the Multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, Multilevel Monte Carlo is applied to a certain variant of the particle filter, the Ensemble Transform Particle Filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.

  6. Research on improved mechanism for particle filter

    NASA Astrophysics Data System (ADS)

    Yu, Jinxia; Xu, Jingmin; Tang, Yongli; Zhao, Qian

    2013-03-01

    Based on an analysis of the particle filter algorithm, two improved mechanisms are studied to improve the performance of the particle filter. First, a hybrid proposal distribution with an annealing parameter is studied, which uses the information in the latest observed measurement to optimize the particle filter. Then, the resampling step in the particle filter is improved by two methods based on partial stratified resampling (PSR). One uses the optimality idea to improve the weights after implementing PSR; the other uses it to improve the weights before implementing PSR and applies an adaptive mutation operation to all particles to preserve the diversity of the particle set after PSR. Finally, simulations based on single-object tracking are implemented, and the performance of the improved mechanisms for the particle filter is evaluated.
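
    For reference, the snippet below sketches plain stratified resampling, the building block behind the partial stratified resampling (PSR) variants discussed above; PSR's restriction to subsets of the particles and the weight adjustments of the paper are not reproduced.

```python
import numpy as np

# Minimal sketch of stratified resampling: the unit interval is split
# into N equal strata and one uniform draw is taken per stratum, which
# lowers the variance of the resampled set relative to plain
# multinomial resampling.  (PSR applies this idea to particle subsets.)

def stratified_resample(weights, rng):
    n = len(weights)
    positions = (rng.random(n) + np.arange(n)) / n    # one draw per stratum
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                              # guard against round-off
    return np.searchsorted(cumulative, positions)     # resampled indices

rng = np.random.default_rng(3)
w = rng.random(10); w /= w.sum()
print(stratified_resample(w, rng))
```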

  7. Chaos particle swarm optimization combined with circular median filtering for geophysical parameters retrieval from Windsat

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Zhenzhan; Shi, Hanqing; Long, Zhiyong; Du, Huadong

    2016-08-01

    This paper established a geophysical retrieval algorithm for sea surface wind vector, sea surface temperature, columnar atmospheric water vapor, and columnar cloud liquid water from WindSat, using the measured brightness temperatures and a matchup database. To retrieve the wind vector, a chaotic particle swarm approach was used to determine a set of possible wind vector solutions which minimize the difference between the forward model and the WindSat observations. An adjusted circular median filtering function was adopted to remove wind direction ambiguity. The validation of the wind speed, wind direction, sea surface temperature, columnar atmospheric water vapor, and columnar cloud liquid water indicates that this algorithm is feasible and reasonable and can be used to retrieve these atmospheric and oceanic parameters. Compared with moored buoy data, the RMS errors for wind speed and sea surface temperature were 0.92 m s-1 and 0.88°C, respectively. The RMS errors for columnar atmospheric water vapor and columnar cloud liquid water were 0.62 mm and 0.01 mm, respectively, compared with F17 SSMIS results. In addition, monthly average results indicated that these parameters are in good agreement with AMSR-E results. Wind direction retrieval was studied under various wind speed conditions and validated by comparison with the QuikSCAT measurements, and the RMS error was 13.3°. This paper offers a new approach to the study of ocean wind vector retrieval using a polarimetric microwave radiometer.

  8. Bounds on the performance of particle filters

    NASA Astrophysics Data System (ADS)

    Snyder, C.; Bengtsson, T.

    2014-12-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
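
    The degeneracy discussed above is usually monitored through the maximum normalized weight and the effective sample size N_eff = 1 / sum(w_i^2); the sketch below (an illustration with synthetic log-weights, not data from the paper) shows how N_eff collapses as the variance of the log-weights grows.

```python
import numpy as np

# Minimal sketch of two degeneracy diagnostics: the maximum normalized
# importance weight (degeneracy means it approaches 1) and the
# effective sample size N_eff = 1 / sum(w_i^2).  As the variance of the
# log-weights grows (e.g. with system dimension), N_eff collapses
# unless the ensemble size grows exponentially.

def degeneracy_stats(log_weights):
    w = np.exp(log_weights - np.max(log_weights))
    w /= w.sum()
    return w.max(), 1.0 / np.sum(w**2)

rng = np.random.default_rng(4)
for dim in (1, 10, 100):                      # variance of log-weights ~ dim
    logw = rng.normal(0.0, np.sqrt(dim), size=1000)
    w_max, n_eff = degeneracy_stats(logw)
    print(f"dim={dim:4d}  max weight={w_max:.3f}  N_eff={n_eff:7.1f}")
```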

  9. Adaptive particle filtering

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús

    2006-05-01

    Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.

  10. Nonlinear optimal semirecursive filtering

    NASA Astrophysics Data System (ADS)

    Daum, Frederick E.

    1996-05-01

    This paper describes a new hybrid approach to filtering, in which part of the filter is recursive but another part is non-recursive. The practical utility of this notion is to reduce computational complexity. In particular, if the non-recursive part of the filter is sufficiently small, then such a filter might be cost-effective to run in real time with computer technology available now or in the future.

  11. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980

  12. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

    Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. A plan for the Option I, bench-scale test program has also been developed based

  13. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
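
    A minimal sketch of the basic global-best particle swarm update for continuous minimization, shown for orientation only; the paper's algorithmic improvements and constraint handling are not reproduced.

```python
import numpy as np

# Minimal sketch of the basic (global-best) particle swarm update for
# continuous minimization; the paper's improvements and constraint
# handling are not reproduced here.

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the 2-D Rosenbrock function.
rosen = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
print(pso(rosen, bounds=([-2, -2], [2, 2])))
```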

  14. Particle flow for nonlinear filters with log-homotopy

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2008-04-01

    We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20 that are smooth and fully coupled (i.e. dense not sparse). The new filter implements Bayes' rule using particle flow rather than with a pointwise multiplication of two functions; this avoids one of the fundamental and well known problems in particle filters, namely "particle collapse" as a result of Bayes' rule. We use a log-homotopy to derive the ODE that describes particle flow. This paper was written for normal engineers, who do not have homotopy for breakfast.
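
    As an illustration of the flow idea for the simplest case, the sketch below migrates prior samples through a pseudo-time lambda in [0, 1] for a scalar linear-Gaussian measurement update, using the closed-form drift known for that case, and compares the result with the Kalman posterior. It shows why no reweighting (and hence no particle collapse) occurs, but it is not the log-homotopy derivation of the paper itself.

```python
import numpy as np

# Minimal sketch of the particle-flow idea for a scalar linear-Gaussian
# measurement update: prior samples are migrated continuously in a
# pseudo-time lambda in [0, 1] instead of being reweighted by Bayes'
# rule.  The drift below is the closed-form flow known for the
# linear-Gaussian case; it illustrates the idea, not the paper's
# log-homotopy derivation.

rng = np.random.default_rng(5)
xbar, p = 0.0, 1.0        # prior mean and variance
h, r, z = 1.0, 0.5, 1.2   # measurement model y = h*x + N(0, r), observed z

x = rng.normal(xbar, np.sqrt(p), 5000)     # prior samples
n_steps = 200
dlam = 1.0 / n_steps
for k in range(n_steps):
    lam = k * dlam
    A = -0.5 * p * h**2 / (lam * h**2 * p + r)
    b = (1 + 2 * lam * A) * ((1 + lam * A) * p * h * z / r + A * xbar)
    x = x + dlam * (A * x + b)             # Euler step of dx/dlam = A x + b

K = p * h / (h**2 * p + r)                 # Kalman gain for reference
print("flow mean/var: ", x.mean(), x.var())
print("exact mean/var:", xbar + K * (z - h * xbar), (1 - K * h) * p)
```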

  15. Distributed SLAM using improved particle filter for mobile robot localization.

    PubMed

    Pei, Fujun; Wu, Mei; Zhang, Simin

    2014-01-01

    The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with a centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in SLAM problems, which involve a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed by mixing the innovation method and the number of effective particles method, which combined the advantages of these two methods. This paper also extends previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improved the ability of the DPF-SLAM system to isolate faults and gave the system better fault tolerance and robustness. PMID:24883362

  17. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to provide, it is hoped, a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry

  18. Westinghouse Advanced Particle Filter System

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.; Bachovchin, D.M.

    1996-12-31

    Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper reports on the development and status of testing of the Westinghouse Advanced Hot Gas Particle Filter (W-APF) including: W-APF integrated operation with the American Electric Power, 70 MW PFBC clean coal facility--approximately 6000 test hours completed; approximately 2500 hours of testing at the Hans Ahlstrom 10 MW PCFB facility located in Karhula, Finland; over 700 hours of operation at the Foster Wheeler 2 MW 2nd generation PFBC facility located in Livingston, New Jersey; status of Westinghouse HGF supply for the DOE Southern Company Services Power System Development Facility (PSDF) located in Wilsonville, Alabama; the status of the Westinghouse development and testing of HGFs for Biomass Power Generation; and the status of the design and supply of the HGF unit for the 95 MW Pinon Pine IGCC Clean Coal Demonstration.

  19. System and Apparatus for Filtering Particles

    NASA Technical Reports Server (NTRS)

    Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)

    2015-01-01

    A modular pre-filtration apparatus may be beneficial to extend the life of a filter. The apparatus may include an impactor that can collect a first set of particles in the air, and a scroll filter that can collect a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus causing the life of the filter to be increased.

  20. Angle only tracking with particle flow filters

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2011-09-01

    We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.

  1. Early maritime applications of particle filtering

    NASA Astrophysics Data System (ADS)

    Richardson, Henry R.; Stone, Lawrence D.; Monach, W. Reynolds; Discenza, Joseph H.

    2003-12-01

    This paper provides a brief history of some operational particle filters that were used by the U. S. Coast Guard and U. S. Navy. Starting in 1974 the Coast Guard system provided Search and Rescue Planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972. The systems operated in a sequential, Bayesian manner. A prior distribution for the target's location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target's location. This distribution was used as the prior for the next iteration of planning and search.
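
    A minimal sketch of the Bayesian search update described above, with hypothetical numbers: each particle is a candidate target position, and an unsuccessful search of a region multiplies the weights of particles inside it by (1 - p_d); the reweighted set is the posterior used to plan the next search. Target motion and the actual search-planning step are omitted.

```python
import numpy as np

# Minimal sketch of the Bayesian search update: each particle is a
# hypothesized target position, and an unsuccessful search of a region
# multiplies that particle's weight by (1 - p_d), the probability the
# search would have missed a target there.  All numbers are
# placeholders; motion modeling and search planning are omitted.

rng = np.random.default_rng(6)
particles = rng.uniform(-50.0, 50.0, size=(10_000, 2))   # candidate positions
weights = np.full(len(particles), 1.0 / len(particles))

def unsuccessful_search(particles, weights, center, radius, p_detect=0.8):
    inside = np.linalg.norm(particles - center, axis=1) < radius
    weights = weights * np.where(inside, 1.0 - p_detect, 1.0)
    return weights / weights.sum()

# Two sorties over the north-east quadrant find nothing: probability
# mass drains out of the searched circles.
weights = unsuccessful_search(particles, weights, center=(20.0, 20.0), radius=15.0)
weights = unsuccessful_search(particles, weights, center=(30.0, 10.0), radius=15.0)
print("mass remaining near (25, 15):",
      weights[np.linalg.norm(particles - (25.0, 15.0), axis=1) < 10].sum())
```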

  3. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    SciTech Connect

    R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

    2002-06-30

    Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system are expected to be superior to those of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program.

  4. Westinghouse advanced particle filter system

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

    1995-11-01

    Integrated Gasification Combined Cycles (IGCC), Pressurized Fluidized Bed Combustion (PFBC) and Advanced PFBC (APFB) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC, PFBC and APFB in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of these advanced, solid fuel power generation cycles.

  5. Optimal rate filters for biomedical point processes.

    PubMed

    McNames, James

    2005-01-01

    Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132

  6. Westinghouse advanced particle filter system

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

    1994-10-01

    Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper updates the assessment of the Westinghouse hot gas filter design based on ongoing testing and analysis. Results are summarized from recent computational fluid dynamics modeling of the plenum flow during back pulse, analysis of candle stressing under cleaning and process transient conditions, and testing and analysis to evaluate potential flow-induced candle vibration.

  7. Adaptive Mallow's optimization for weighted median filters

    NASA Astrophysics Data System (ADS)

    Rachuri, Raghu; Rao, Sathyanarayana S.

    2002-05-01

    This work extends the idea of spectral optimization for the design of Weighted Median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows. Sample selection probabilities have been used as a basis for designing stack smoothers as they give a measure of the filter's detail preserving ability and give non-negative filter weights. We will extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike the linear filters, however, the weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
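
    For reference, a minimal sketch of the weighted median filter itself: for positive weights the output is the sample at which the sorted cumulative weight first reaches half the total, and a negative weight is handled by the usual device of negating both the weight and its sample. The adaptive, Mallows-based design of the weights is not reproduced here.

```python
import numpy as np

# Minimal sketch of a weighted median (WM) filter.  For positive weights
# the output is the sample where the sorted cumulative weight first
# reaches half of the total weight; a negative weight is handled by
# negating both the weight and the corresponding sample.
# (The adaptive design of the weights themselves is not reproduced.)

def weighted_median(samples, weights):
    s = np.asarray(samples, float).copy()
    w = np.asarray(weights, float).copy()
    s[w < 0] *= -1                      # sign trick for negative weights
    w = np.abs(w)
    order = np.argsort(s)
    cum = np.cumsum(w[order])
    return s[order][np.searchsorted(cum, 0.5 * w.sum())]

def wm_filter(signal, weights):
    k = len(weights) // 2
    padded = np.pad(signal, k, mode="edge")
    return np.array([weighted_median(padded[i:i + len(weights)], weights)
                     for i in range(len(signal))])

# Example: smooth a sine wave corrupted by occasional impulses.
noisy = np.sin(np.linspace(0, 6, 200)) + (np.random.default_rng(7).random(200) < 0.05) * 3
print(wm_filter(noisy, weights=[1, 2, 3, 2, 1])[:5])
```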

  8. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre–coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  9. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
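
    A minimal sketch of the resample-move idea under stated assumptions: after resampling, each particle takes a few random-walk Metropolis-Hastings steps that leave the target posterior invariant, restoring the diversity lost to resampling. The log-posterior below is a toy stand-in, not the rainfall-runoff model states of the study.

```python
import numpy as np

# Minimal sketch of the resample-move step: after resampling, each
# particle takes a few Metropolis-Hastings moves that leave the target
# posterior invariant, which restores diversity lost to resampling.
# The log-posterior here is a toy stand-in for the model states.

def resample_move(particles, weights, log_post, n_moves=3, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)        # multinomial resampling
    x = particles[idx].astype(float)
    lp = log_post(x)
    for _ in range(n_moves):                      # random-walk MH moves
        prop = x + rng.normal(0.0, step, x.shape)
        lp_prop = log_post(prop)
        accept = np.log(rng.random(n)) < lp_prop - lp
        x[accept], lp[accept] = prop[accept], lp_prop[accept]
    return x, np.full(n, 1.0 / n)

# Toy weighted sample targeting a standard normal posterior.
rng = np.random.default_rng(8)
parts = rng.normal(0.0, 2.0, 2000)                   # broad proposal
logw = -0.5 * parts**2 + 0.5 * (parts / 2.0) ** 2    # log(target/proposal), constants dropped
w = np.exp(logw - logw.max()); w /= w.sum()
moved, _ = resample_move(parts, w, log_post=lambda x: -0.5 * x**2)
print("mean, std after resample-move:", moved.mean(), moved.std())
```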

  10. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using the spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relatively fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase, hence reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108

  11. Testing particle filters on convective scale dynamics

    NASA Astrophysics Data System (ADS)

    Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

    2014-05-01

    Particle filters have been developed in recent years to deal with highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation applications. The method is tested in an idealized setting on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the ensemble Kalman filter and the sequential importance resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical

  12. A comparison of EAKF and particle filter: towards an ensemble adjustment Kalman particle filter

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangming; Shen, Zheqi; Tang, Youmin

    2016-04-01

    Bayesian estimation theory provides a general approach for state estimation. In this study, we first explore two Bayesian-based methods: the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (Lorenz '63 model). The EAKF can be regarded as a deterministic scheme of the ensemble Kalman filter (EnKF), which performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. On the basis of comparisons between the SIR-PF and the EAKF, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF given limited ensemble size.

  13. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as Monte Carlo analysis capability, is included to enable statistical performance evaluations.

  14. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the

  15. Particle filter-based track before detect algorithms

    NASA Astrophysics Data System (ADS)

    Boers, Yvo; Driessen, Hans

    2003-12-01

    In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique in which tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, which leads to better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and the detection aspect of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the implementation of the solution is by means of a particle filter.
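
    A heavily simplified sketch of the detection idea (my abstraction, not the paper's formulation): the mean of the unnormalized particle weights at each scan estimates the measurement likelihood under the target-present hypothesis, so a running log-likelihood ratio against the noise-only hypothesis can be accumulated from the filter and thresholded; the numbers and threshold below are placeholders.

```python
import numpy as np

# Heavily simplified sketch of track-before-detect detection from
# particle weights: the mean of the unnormalized weights at each scan
# estimates the measurement likelihood under the "target present"
# hypothesis, so a running log-likelihood ratio against "noise only"
# can be accumulated and thresholded.  Dynamics, the raw-data
# likelihood and the threshold are all placeholders, not the paper's.

def lr_update(log_lr, unnormalized_weights, noise_only_likelihood):
    """Add one scan's contribution to the running log likelihood ratio."""
    p_data_target = np.mean(unnormalized_weights)   # estimate of p(z_k | target, past)
    return log_lr + np.log(p_data_target) - np.log(noise_only_likelihood)

# Toy usage with made-up numbers for a single scan:
log_lr = lr_update(log_lr=0.0,
                   unnormalized_weights=np.array([0.8, 1.3, 0.9, 1.1]),
                   noise_only_likelihood=0.7)
print("declare target" if log_lr > 2.0 else "no detection", log_lr)
```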

  17. Factored interval particle filtering for gait analysis.

    PubMed

    Saboune, Jamal; Rose, Cédric; Charpillet, François

    2007-01-01

    Commercial gait analysis systems rely on wearable sensors. The goal of this study is to develop a low-cost, markerless human motion capture tool. Our method is based on the estimation of 3d movements using video streams and the projection of a 3d human body model. Dynamic parameters only depend on human body movement constraints. No trained gait model is used, which makes this approach generic. The 3d model is characterized by the angular positions of its articulations. The kinematic chain structure allows the state vector representing the configuration of the model to be factored. We use a dynamic Bayesian network and a modified particle filtering algorithm to estimate the most likely state configuration given an observation sequence. The modified algorithm takes advantage of the factorization of the state vector for efficiently weighting and resampling the particles. PMID:18002684

  18. GNSS data filtering optimization for ionospheric observation

    NASA Astrophysics Data System (ADS)

    D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.

    2015-12-01

    In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS data, consisting of 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are typically filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim of tackling the non-ionospheric effects induced by multipath at the ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained by the GNSS signal in a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy

  19. Constrained filter optimization for subsurface landmine detection

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik

    2006-05-01

    Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous multi-layered road-scenarios seem to indicate that although adaptive solutions to background cancellation are effective, the adaptive solutions to background cancellation under different road conditions can differ significantly, and misapplication of these adaptive solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust results for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.

  20. On optimal filtering of measured Mueller matrices

    NASA Astrophysics Data System (ADS)

    Gil, José J.

    2016-07-01

    While any two-dimensional mixed state of polarization of light can be represented by a combination of a pure state and a fully random state, any Mueller matrix can be represented by a convex combination of a pure component and three additional components whose randomness is scaled in a proper and objective way. Such characteristic decomposition constitutes the appropriate framework for the characterization of the polarimetric randomness of the system represented by a given Mueller matrix, and provides criteria for the optimal filtering of noise in experimental polarimetry.

  1. Groupwise surface correspondence using particle filtering

    NASA Astrophysics Data System (ADS)

    Li, Guangxu; Kim, Hyoungseop; Tan, Joo Kooi; Ishikawa, Seiji

    2015-03-01

    To obtain an effective interpretation of organic shape using statistical shape models (SSMs), the correspondence of the landmarks through all the training samples is the most challenging part in model building. In this study, a coarse-to-fine groupwise correspondence method for 3-D polygonal surfaces is proposed. We prepare a reference model in advance. Then all the training samples are mapped to a unified spherical parameter space. According to the positions of landmarks of the reference model, the candidate regions for correspondence are chosen. Finally, we refine the perceptually correct correspondences between landmarks using a particle filter algorithm, where the likelihood of local surface features is introduced as the criterion. The proposed method was applied to the correspondence of 9 cases of left lung training samples. Experimental results show the proposed method is flexible and under-constrained.

  2. Optimal edge filters explain human blur detection.

    PubMed

    McIlhagga, William H; May, Keith A

    2012-01-01

    Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

  3. Comparative evaluation of ensemble Kalman filter, particle filter and variational techniques for river discharge forecast

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.

    2012-12-01

    Hydrologic data assimilation techniques provide a means to improve river discharge forecasts by updating hydrologic model states and correcting the atmospheric forcing data through optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation technique used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques, namely the ensemble Kalman filter (EnKF), the particle filter (PF) and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km2, in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique for hydrologic applications.

  4. Online maintaining appearance model using particle filter

    NASA Astrophysics Data System (ADS)

    Chen, Siying; Lan, Tian; Wang, Jianyu; Ni, Guoqiang

    2008-03-01

    Tracking by foreground matching depends heavily on the appearance model to establish object correspondences among frames; essentially, the appearance model should encode both the part that differs between the object and the background, to guarantee robustness, and the stable part, to ensure tracking consistency. This paper provides a solution for online maintenance of appearance models by adjusting the features in the model. Object appearance is co-modeled by a subset of Haar features selected from an over-complete feature dictionary, which encodes the discriminative part of the object appearance, and a color histogram, which describes the stable appearance. During the particle filtering process, feature values from both background patches and object observations are sampled efficiently with the aid of "foreground" and "background" particles, respectively. Based on these sampled values, top-ranked discriminative features are added and invalid features are removed, so that the object remains distinguishable from the current background according to the evolving appearance model. The tracker based on this online appearance model maintenance technique has been tested on people and car tracking tasks, and promising experimental results are obtained.

  5. A backtracking algorithm that deals with particle filter degeneracy

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    Particle filters are an excellent way to deal with stochastic models incorporating Bayesian data assimilation. While they are computationally demanding, the particle filter has no problem with nonlinearity and it accepts non-Gaussian observational data. In the geoscientific field it is this computational demand that creates a problem, since dynamic grid-based models are often already quite computationally demanding. It is therefore of the utmost importance to keep the number of samples in the filter as small as possible. However, small sample populations often lead to filter degeneracy, especially in models with high stochastic forcing. Filter degeneracy renders the sample population useless, as the population is no longer statistically informative. We have created an algorithm in an existing data assimilation framework that reacts to and deals with filter degeneracy, based on Spiller et al. [2008]. During the Bayesian updating step of the standard particle filter, the algorithm tests the sample population for filter degeneracy. If filter degeneracy has occurred, the algorithm resets to the last time the filter did work correctly and recalculates the failed timespan of the filter with an increased sample population. The sample population is then reduced to its original size and the particle filter continues as normal. This algorithm was created in the PCRaster Python framework, an open source tool that enables spatio-temporal forward modelling in Python [Karssenberg et al., 2010]. The framework already contains several data assimilation algorithms, including a standard particle filter and a Kalman filter. The backtracking particle filter algorithm has been added to the framework, which will make it easy to apply in other research. The performance of the backtracking particle filter is tested against a standard particle filter using two models. The first is a simple nonlinear point model, and the second is a more complex geophysical model. The main testing
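
    The abstract does not state the degeneracy criterion or the checkpointing scheme; the following is a minimal sketch of the backtracking idea only, not the PCRaster implementation. The effective sample size test, the `propagate` and `likelihood` callables, and the growth factor are assumptions for illustration.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights; a common degeneracy measure."""
    w = weights / weights.sum()
    return 1.0 / np.sum(w ** 2)

def backtracking_pf(x0, observations, propagate, likelihood,
                    n_base=50, growth=4, ess_frac=0.1, seed=0):
    """Sketch of a backtracking particle filter.

    propagate(particles, t) and likelihood(particles, y) are hypothetical,
    model-specific callables; `propagate` is assumed to include the stochastic
    forcing, so redoing a span with more particles yields a fresh ensemble.
    """
    rng = np.random.default_rng(seed)
    particles = np.repeat(np.atleast_2d(x0), n_base, axis=0)
    checkpoint_t, checkpoint = 0, particles.copy()

    t = 0
    while t < len(observations):
        particles = propagate(particles, t)
        w = likelihood(particles, observations[t])

        if effective_sample_size(w) < ess_frac * len(particles):
            # Degeneracy detected: rewind to the checkpoint and redo the failed
            # span with an enlarged ensemble (a full version would cap retries).
            t = checkpoint_t
            particles = np.repeat(checkpoint, growth, axis=0)
            continue

        # Standard update: resample down to the base ensemble size and move on.
        idx = rng.choice(len(particles), size=n_base, p=w / w.sum())
        particles = particles[idx]
        checkpoint_t, checkpoint = t + 1, particles.copy()
        t += 1
    return particles
```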

  6. Bayesian auxiliary particle filters for estimating neural tuning parameters.

    PubMed

    Mountney, John; Sobel, Marc; Obeid, Iyad

    2009-01-01

    A common challenge in neural engineering is to track the dynamic parameters of neural tuning functions. This work introduces the application of Bayesian auxiliary particle filters for this purpose. Based on Monte-Carlo filtering, Bayesian auxiliary particle filters use adaptive methods to model the prior densities of the state parameters being tracked. The observations used are the neural firing times, modeled here as a Poisson process, and the biological driving signal. The Bayesian auxiliary particle filter was evaluated by simultaneously tracking the three parameters of a hippocampal place cell and compared to a stochastic state point process filter. It is shown that Bayesian auxiliary particle filters are substantially more accurate and robust than alternative methods of state parameter estimation. The effects of time-averaging on parameter estimation are also evaluated. PMID:19963911

  7. Optimization of phononic filters via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Hussein, M. I.; El-Beltagy, M. A.

    2007-12-01

    A phononic crystal is commonly characterized by its dispersive frequency spectrum. With appropriate spatial distribution of the constituent material phases, spectral stop bands could be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density are obtained and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective could be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.

  8. Metal finishing wastewater pressure filter optimization

    SciTech Connect

    Norford, S.W.; Diener, G.A.; Martin, H.L.

    1992-12-31

    The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated by the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until the waste is stabilized/solidified for permanent disposal. About 85% of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.

  10. Tractable particle filters for robot fault diagnosis

    NASA Astrophysics Data System (ADS)

    Verma, Vandi

    Experience has shown that even carefully designed and tested robots may encounter anomalous situations. It is therefore important for robots to monitor their state so that anomalous situations can be detected in a timely manner. Robot fault diagnosis typically requires tracking a very large number of possible faults in complex nonlinear dynamic systems with noisy sensors. Traditional methods either ignore the uncertainty or use linear approximations of the nonlinear system dynamics. Such approximations are often unrealistic, and as a result faults either go undetected or become confused with non-fault conditions. Probability theory provides a natural representation for uncertainty, but an exact Bayesian solution to the diagnosis problem is intractable. Classical Monte Carlo methods, such as particle filters, suffer from substantial computational complexity. This is particularly true in the presence of rare yet important events, such as many system faults. The thesis presents a set of complementary algorithms that provide an approach for computationally tractable fault diagnosis. These algorithms leverage probabilistic approaches to decision theory and information theory to efficiently track a large number of faults in a general dynamic system with noisy measurements. The problem of fault diagnosis is represented as hybrid (discrete/continuous) state estimation. By taking advantage of structure in the domain, the approach dynamically concentrates computation in the regions of state space that are currently most relevant, without losing track of less likely states. Experiments with a dynamic simulation of a six-wheel rocker-bogie rover show a significant improvement in performance over the classical approach.

  12. Human-manipulator interface using particle filter.

    PubMed

    Du, Guanglong; Zhang, Ping; Wang, Xueqian

    2014-01-01

    This paper presents a human-robot interface system which incorporates a particle filter (PF) and an adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand, and a PF to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases over a short period of time when the sensors fail to detect the hand motion; therefore, a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430

  13. Blended particle filters for large-dimensional chaotic dynamical systems.

    PubMed

    Majda, Andrew J; Qi, Di; Sapsis, Themistoklis P

    2014-05-27

    A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886

  14. Penetration of Combustion Aerosol Particles Through Filters of NIOSH-Certified Filtering Facepiece Respirators (FFRs).

    PubMed

    Gao, Shuang; Kim, Jinyong; Yermakov, Michael; Elmashae, Yousef; He, Xinjian; Reponen, Tiina; Grinshpun, Sergey A

    2015-01-01

    Filtering facepiece respirators (FFRs) are commonly worn by first responders, first receivers, and other exposed groups to protect against exposure to airborne particles, including those originated by combustion. Most of these FFRs are NIOSH-certified (e.g., N95-type) based on performance testing of their filters against charge-equilibrated aerosol challenges, e.g., NaCl. However, it has not been examined whether the filtration data obtained with the NaCl-challenged FFR filters adequately represent the protection against real aerosol hazards such as combustion particles. A filter sample of an N95 FFR mounted on a specially designed holder was challenged with NaCl particles and three combustion aerosols generated in a test chamber by burning wood, paper, and plastic. The concentrations upstream (Cup) and downstream (Cdown) of the filter were measured with a TSI P-Trak condensation particle counter and a Grimm Nanocheck particle spectrometer. Penetration was determined as (Cdown/Cup) × 100%. Four test conditions were chosen to represent inhalation flows of 15, 30, 55, and 85 L/min. Results showed that the penetration values of combustion particles were significantly higher than those of the "model" NaCl particles (p < 0.05), raising a concern about the applicability of N95 filter performance obtained with the NaCl aerosol challenge to protection against combustion particles. Aerosol type, inhalation flow rate and particle size were significant (p < 0.05) factors affecting the performance of the N95 FFR filter. In contrast to N95 filters, the penetration of combustion particles through R95 and P95 FFR filters (tested in addition to N95) was not significantly higher than that obtained with NaCl particles. The findings were attributed to several effects, including the degradation of an N95 filter due to hydrophobic organic components generated into the air by combustion. Their interaction with fibers is anticipated to be similar to those involving "oily" particles

  15. Symmetric Phase-Only Filtering in Particle-Image Velocimetry

    NASA Technical Reports Server (NTRS)

    Wemet, Mark P.

    2008-01-01

    and second-image subregions are normalized by the square roots of their respective magnitudes. This scheme yields optimal performance because the amounts of normalization applied to the spatial-frequency contents of the input and filter scenes are just enough to enhance their high-spatial-frequency contents while reducing their spurious low-spatial-frequency content. As a result, in SPOF PIV processing, particle-displacement correlation peaks can readily be detected above spurious background peaks, without need for masking or background subtraction.

  16. Simultaneous Eye Tracking and Blink Detection with Interactive Particle Filters

    NASA Astrophysics Data System (ADS)

    Wu, Junwen; Trivedi, Mohan M.

    2007-12-01

    We present a system that simultaneously tracks eyes and detects eye blinks. Two interactive particle filters are used for this purpose, one for the closed eyes and the other one for the open eyes. Each particle filter is used to track the eye locations as well as the scales of the eye subjects. The set of particles that gives higher confidence is defined as the primary set and the other one is defined as the secondary set. The eye location is estimated by the primary particle filter, and whether the eye status is open or closed is also decided by the label of the primary particle filter. When a new frame comes, the secondary particle filter is reinitialized according to the estimates from the primary particle filter. We use autoregression models for describing the state transition and a classification-based model for measuring the observation. Tensor subspace analysis is used for feature extraction which is followed by a logistic regression model to give the posterior estimation. The performance is carefully evaluated from two aspects: the blink detection rate and the tracking accuracy. The blink detection rate is evaluated using videos from varying scenarios, and the tracking accuracy is given by comparing with the benchmark data obtained using the Vicon motion capturing system. The setup for obtaining benchmark data for tracking accuracy evaluation is presented and experimental results are shown. Extensive experimental evaluations validate the capability of the algorithm.
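
    The interaction between the two filters can be made concrete with a small sketch under stated assumptions: the appearance likelihoods (`likelihood_open`, `likelihood_closed`), the random-walk transition standing in for the paper's autoregressive model, and the initialization values are all placeholders rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def track_eye(frames, likelihood_open, likelihood_closed,
              n_particles=200, motion_std=(2.0, 2.0, 0.02)):
    """Two interacting particle filters over (x, y, scale), one per eye status.

    likelihood_open / likelihood_closed are hypothetical appearance models that
    score each particle against a frame (the paper uses tensor subspace features
    with logistic regression).
    """
    init = lambda mean: rng.normal(mean, [5.0, 5.0, 0.1], size=(n_particles, 3))
    filters = {"open": init([60.0, 40.0, 1.0]), "closed": init([60.0, 40.0, 1.0])}
    models = {"open": likelihood_open, "closed": likelihood_closed}
    estimates = []

    for frame in frames:
        confidence, means = {}, {}
        for label in filters:
            p = filters[label] + rng.normal(0.0, motion_std, (n_particles, 3))
            raw = models[label](p, frame)            # unnormalized likelihoods
            confidence[label] = raw.sum()
            w = raw / raw.sum()
            means[label] = w @ p                     # weighted mean (x, y, scale)
            filters[label] = p[rng.choice(n_particles, n_particles, p=w)]

        # The higher-confidence filter is primary: it provides the eye location
        # and the open/closed label; the secondary is reseeded around its estimate.
        primary = max(confidence, key=confidence.get)
        secondary = "closed" if primary == "open" else "open"
        estimates.append((means[primary], primary))
        filters[secondary] = init(means[primary])
    return estimates
```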

  17. Ballistic target tracking algorithm based on improved particle filtering

    NASA Astrophysics Data System (ADS)

    Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang

    2015-10-01

    Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear and non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs better in estimating the states and parameters of nonlinear and non-Gaussian systems. Monte Carlo simulation results show that this method can effectively alleviate the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, so that better particles take part in the estimation. Meanwhile, the CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.

  18. Method of concurrently filtering particles and collecting gases

    SciTech Connect

    Mitchell, Mark A; Meike, Annemarie; Anderson, Brian L

    2015-04-28

    A system for concurrently filtering particles and collecting gases. Materials can be added (e.g., via coating the ceramic substrate, use of loose powder(s), or other means) to a HEPA filter (ceramic, metal, or otherwise) to collect gases (e.g., radioactive gases such as iodine). The gases could be radioactive, hazardous, or valuable gases.

  19. Particle filter-based prognostics: Review, discussion and perspectives

    NASA Astrophysics Data System (ADS)

    Jouin, Marine; Gouriveau, Rafael; Hissel, Daniel; Péra, Marie-Cécile; Zerhouni, Noureddine

    2016-05-01

    Particle filters are of great interest in a large variety of engineering fields such as robotics, statistics or automatic control. Recently, they have spread among Prognostics and Health Management (PHM) applications for diagnostics and prognostics; according to some authors, they have even become a state-of-the-art technique for prognostics. Nowadays, around 50 papers dealing with prognostics based on particle filters can be found in the literature. However, no comprehensive review has been proposed on the subject until now. This paper aims at analyzing the way particle filters are used in that context. The development of the tool in the prognostics field is discussed before entering into the details of its practical use and implementation. Current issues are identified and analyzed, and some solutions or lines of work are proposed. All this aims at highlighting future perspectives as well as helping new users to start with particle filters for prognostics.

  20. Resampling Algorithms for Particle Filters: A Computational Complexity Perspective

    NASA Astrophysics Data System (ADS)

    Bolić, Miodrag; Djurić, Petar M.; Hong, Sangjin

    2004-12-01

    Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization through addressing common issues such as decreasing the number of operations and memory access. Moreover, the algorithms allow for use of higher sampling frequencies by overlapping in time the resampling step with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and joint detection and estimation in wireless communications. We have demonstrated that the proposed algorithms reduce the complexity without performance degradation.
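
    The paper's hardware-oriented algorithms are not reproduced in this listing; as a point of reference, a minimal sketch of the standard systematic resampling step that such designs build on is given below (plain NumPy, not a hardware description).

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: O(N), a single random offset, low variance.

    Returns the indices of the particles to replicate; weights must be non-negative.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n        # one stratified grid of points
    cumulative = np.cumsum(weights / np.sum(weights))
    cumulative[-1] = 1.0                                 # guard against round-off
    return np.searchsorted(cumulative, positions)

# usage: idx = systematic_resample(w); particles = particles[idx]; w[:] = 1.0 / len(w)
```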

  1. Geomagnetic field modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.

  2. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, the considered areas are strongly or weakly filtered. Several variants have been developed to determine alpha adaptively using indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.

  3. Optimal filter bandwidth for pulse oximetry

    NASA Astrophysics Data System (ADS)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
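
    As a rough illustration of the processing whose corner frequency the paper varies, the sketch below low-pass filters raw photodiode signals at a software-adjustable corner frequency and forms the R ratio from the AC and DC components of the red and infrared channels. The channel names, sampling rate, and AC/DC definitions are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def r_ratio(red, infrared, fs=100.0, corner_hz=1.5):
    """Low-pass both channels at `corner_hz`, then compute
    R = (AC_red / DC_red) / (AC_ir / DC_ir)."""
    b, a = butter(2, corner_hz, btype="low", fs=fs)   # 2nd-order Butterworth low-pass
    red_f = filtfilt(b, a, red)
    ir_f = filtfilt(b, a, infrared)

    def ac_dc(x):
        dc = np.mean(x)           # baseline (DC) component
        ac = np.ptp(x)            # peak-to-peak amplitude of the pulse wave
        return ac, dc

    ac_r, dc_r = ac_dc(red_f)
    ac_i, dc_i = ac_dc(ir_f)
    return (ac_r / dc_r) / (ac_i / dc_i)
```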

  4. Optimal Gain Filter Design for Perceptual Acoustic Echo Suppressor

    NASA Astrophysics Data System (ADS)

    Kim, Kihyeon; Ko, Hanseok

    This Letter proposes an optimal gain filter for the perceptual acoustic echo suppressor. We designed an optimally-modified log-spectral amplitude estimation algorithm for the gain filter in order to achieve robust suppression of echo and noise. A new parameter including information about interferences (echo and noise) of single-talk duration is statistically analyzed, and then the speech absence probability and the a posteriori SNR are judiciously estimated to determine the optimal solution. The experiments show that the proposed gain filter attains a significantly improved reduction of echo and noise with less speech distortion.

  5. Entropy-based optimization of wavelet spatial filters.

    PubMed

    Farina, Darino; Kamavuako, Ernest Nlandu; Wu, Jian; Naddeo, Francesco

    2008-03-01

    A new class of spatial filters for surface electromyographic (EMG) signal detection is proposed. These filters are based on the 2-D spatial wavelet decomposition of the surface EMG recorded with a grid of electrodes and inverse transformation after zeroing a subset of the transformation coefficients. The filter transfer function depends on the selected mother wavelet in the two spatial directions. Wavelet parameterization is proposed with the aim of signal-based optimization of the transfer function of the spatial filter. The optimization criterion was the minimization of the entropy of the time samples of the output signal. The optimized spatial filter is linear and space invariant. In simulated and experimental recordings, the optimized wavelet filter showed increased selectivity with respect to previously proposed filters. For example, in simulation, the ratio between the peak-to-peak amplitude of action potentials generated by motor units 20 degrees apart in the transversal direction was 8.58% (with monopolar recording), 2.47% (double differential), 2.59% (normal double differential), and 0.47% (optimized wavelet filter). In experimental recordings, the duration of the detected action potentials decreased from (mean +/- SD) 6.9 +/- 0.3 ms (monopolar recording), to 4.5 +/- 0.2 ms (normal double differential), 3.7 +/- 0.2 (double differential), and 3.0 +/- 0.1 ms (optimized wavelet filter). In conclusion, the new class of spatial filters with the proposed signal-based optimization of the transfer function allows better discrimination of individual motor unit activities in surface EMG recordings than it was previously possible. PMID:18334382

  6. Forward-looking infrared 3D target tracking via combination of particle filter and SIFT

    NASA Astrophysics Data System (ADS)

    Li, Xing; Cao, Zhiguo; Yan, Ruicheng; Li, Tuo

    2013-10-01

    Aiming at the problem of tracking a 3D target in forward-looking infrared (FLIR) images, this paper proposes a high-accuracy, robust tracking algorithm based on SIFT and particle filtering. The main contribution of this paper is a new method for estimating the affine transformation matrix parameters based on the Monte Carlo methods of particle filtering. First, we extract SIFT features from the infrared image and calculate the initial affine transformation matrix with the optimal candidate key points. Then we take the affine transformation parameters as particles and use an SIR (Sequential Importance Resampling) particle filter to estimate the best position, thus implementing our algorithm. The experiments demonstrate that our algorithm is robust with high accuracy.

  7. PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads

    PubMed Central

    Cheng, Wen-Chang

    2012-01-01

    In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model with a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimum of the system status rather than the global optimum. Thus, the particle swarm optimization method is used to further search all system statuses for the global optimal system status. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behaviour of fish schools or insects under the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453

  8. Particle-filter-based phase estimation in digital holographic interferometry.

    PubMed

    Waghmare, Rahul G; Ram Sukumar, P; Subrahmanyam, G R K S; Singh, Rakesh Kumar; Mishra, Deepak

    2016-03-01

    In this paper, we propose a particle-filter-based technique for the analysis of a reconstructed interference field. The particle filter and its variants are well proven as tracking filters in non-Gaussian and nonlinear situations. We propose to apply the particle filter for direct estimation of phase and its derivatives from digital holographic interferometric fringes via a signal-tracking approach on a Taylor series expanded state model and a polar-to-Cartesian-conversion-based measurement model. Computation of sample weights through a non-Gaussian likelihood forms the major contribution of the proposed particle-filter-based approach compared to the existing unscented-Kalman-filter-based approach. It is observed that the proposed approach is highly robust to noise and outperforms the state of the art, especially at very low signal-to-noise ratios (i.e., in the range of -5 to 20 dB). The proposed approach, to the best of our knowledge, is the only method available for phase estimation from severely noisy fringe patterns even when the underlying phase pattern is rapidly varying and has a larger dynamic range. Simulation results and experimental data demonstrate that the proposed approach is a better choice for direct phase estimation. PMID:26974901

  9. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.

  10. Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering

    PubMed Central

    Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider

    2010-01-01

    The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
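
    The paper determines the filter parameters experimentally; as a generic illustration only, the sketch below applies a random-walk Kalman filter to a noisy 2-D particle trajectory, with the process and measurement variances (`q`, `r`) standing in for the experimentally determined values.

```python
import numpy as np

def kalman_smooth_track(measurements, q=0.05, r=0.5):
    """Random-walk Kalman filter for a video-tracked particle trajectory.

    measurements: array of shape (T, d) of noisy positions.
    q: process variance per frame (intrinsic particle motion, assumed here).
    r: measurement variance (static positioning error, assumed here).
    """
    measurements = np.asarray(measurements, dtype=float)
    x = measurements[0].copy()          # state estimate (position)
    p = np.full(measurements.shape[1], r)
    filtered = [x.copy()]

    for z in measurements[1:]:
        # predict: position carries over, uncertainty grows by the process variance
        p_pred = p + q
        # update: blend the prediction with the new measurement
        k = p_pred / (p_pred + r)       # Kalman gain (per coordinate)
        x = x + k * (z - x)
        p = (1.0 - k) * p_pred
        filtered.append(x.copy())
    return np.array(filtered)

# usage on a hypothetical noisy track:
# true = np.cumsum(np.random.normal(0, 0.2, (500, 2)), axis=0)
# noisy = true + np.random.normal(0, 0.7, true.shape)
# smooth = kalman_smooth_track(noisy, q=0.04, r=0.49)
```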

  11. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674

  12. Generic Hardware Architectures for Sampling and Resampling in Particle Filters

    NASA Astrophysics Data System (ADS)

    Athalye, Akshay; Bolić, Miodrag; Hong, Sangjin; Djurić, Petar M.

    2005-12-01

    Particle filtering is a statistical signal processing methodology that has recently gained popularity in solving several problems in signal processing and communications. Particle filters (PFs) have been shown to outperform traditional filters in important practical scenarios. However their computational complexity and lack of dedicated hardware for real-time processing have adversely affected their use in real-time applications. In this paper, we present generic architectures for the implementation of the most commonly used PF, namely, the sampling importance resampling filter (SIRF). These provide a generic framework for the hardware realization of the SIRF applied to any model. The proposed architectures significantly reduce the memory requirement of the filter in hardware as compared to a straightforward implementation based on the traditional algorithm. We propose two architectures each based on a different resampling mechanism. Further, modifications of these architectures for acceleration of resampling process are presented. We evaluate these schemes based on resource usage and latency. The platform used for the evaluations is the Xilinx Virtex II pro FPGA. The architectures presented here have led to the development of the first hardware (FPGA) prototype for the particle filter applied to the bearings-only tracking problem.

  13. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. The scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex situations of occlusion compared to a simple Bootstrap approach, as shown by our experiments on real fish tank sequences.

  14. Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters

    SciTech Connect

    Higby, D.P.

    1984-11-01

    Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.

  15. Multiple states and joint objects particle filter for eye tracking

    NASA Astrophysics Data System (ADS)

    Xiong, Jin; Jiang, Zhaohui; Liu, Junwei; Feng, Huanqing

    2007-11-01

    Recent works have proven that the particle filter is a powerful tracking technique for nonlinear and non-Gaussian estimation problems. This paper presents an extension of the color-based particle filter framework that is applicable to complex eye tracking because of two main innovations. Firstly, the use of an extra discrete-valued variable and its associated transition probability matrix (TPM) makes it feasible to track the multiple states of the eye during blinking. Secondly, modelling both eyes jointly in the state vector eliminates the distraction of the eyes by each other. The experimental results illustrate that the proposed algorithm is efficient for eye tracking.

  16. Westinghouse hot gas particle filter system

    SciTech Connect

    Lippert, T.E.; Bruck, G.J.; Newby, R.A.; Bachovchin, D.M.; Debski, V.L.; Morehead, H.T.

    1997-12-31

    Integrated Gasification Combined Cycles (IGCC) and Pressurized Circulating Fluidized Bed Cycles (PCFB) are being developed and demonstrated for commercial power generation applications. Hot gas particulate filters (HGPF) are key components for the successful implementation of IGCC and PCFB in power generation gas turbine cycles. The objective is to develop and qualify through analysis and testing a practical HGPF system that meets the performance and operational requirements of PCFB and IGCC systems. This paper reports on the status of Westinghouse's HGPF commercialization programs including: A quick summary of past gasification based HGPF test programs; A summary of the integrated HGPF operation at the American Electric Power, Tidd Pressurized Fluidized Bed Combustion (PFBC) Demonstration Project with approximately 6000 hours of HGPF testing completed; A summary of approximately 3200 hours of HGPF testing at the Foster Wheeler (FW) 10 MWe facility located in Karhula, Finland; A summary of over 700 hours of HGPF operation at the FW 2 MWe topping PCFB facility located in Livingston, New Jersey; A summary of the design of the HGPFs for the DOE/Southern Company Services, Power System Development Facility (PSDF) located in Wilsonville, Alabama; A summary of the design of the commercial-scale HGPF system for the Sierra Pacific, Pinon Pine IGCC Project; A review of completed testing and a summary of planned testing of Westinghouse HGPFs in Biomass IGCC applications; and A brief summary of the HGPF systems for the City of Lakeland, McIntosh Unit 4 PCFB Demonstration Project.

  17. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
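
    The abstract describes an adaptive-step gradient descent over the three OT-MACH parameters, driven by a composite of correlation peak height and peak-to-sidelobe ratio. The sketch below shows that outer optimization loop only, written as ascent on the composite score (equivalent to descent on its negative); `build_filter` and `composite_score` are hypothetical placeholders for the filter generation and correlation-performance measurement the paper implements.

```python
import numpy as np

def optimize_otmach(build_filter, composite_score, params0=(0.3, 0.3, 0.4),
                    step0=0.1, n_iter=100, eps=1e-3):
    """Adaptive-step gradient ascent on the composite performance measure.

    params = (alpha, beta, gamma); build_filter(params) returns a candidate
    OT-MACH filter and composite_score(filter) combines correlation peak height
    and peak-to-sidelobe ratio (both callables are placeholders here).
    """
    params = np.array(params0, dtype=float)
    step = step0
    best = composite_score(build_filter(params))

    for _ in range(n_iter):
        # finite-difference estimate of the gradient of the composite score
        grad = np.zeros_like(params)
        for i in range(len(params)):
            trial = params.copy()
            trial[i] += eps
            grad[i] = (composite_score(build_filter(trial)) - best) / eps

        candidate = params + step * grad
        score = composite_score(build_filter(candidate))
        if score > best:                 # accept the move and grow the step
            params, best, step = candidate, score, step * 1.2
        else:                            # reject the move and shrink the step
            step *= 0.5
    return params, best
```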

  18. Sequential bearings-only-tracking initiation with particle filtering method.

    PubMed

    Liu, Bin; Hao, Chengpeng

    2013-01-01

    The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation. PMID:24453865

  20. Nonlinear Statistical Signal Processing: A Particle Filtering Approach

    SciTech Connect

    Candy, J

    2007-09-19

    An introduction to particle filtering is presented, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g. Kalman filters) are no longer a restriction on performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.
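
    To make the sequential importance sampling/resampling idea concrete, here is a minimal bootstrap (SIR) particle filter for a generic scalar state-space model; the transition and likelihood chosen below are illustrative assumptions, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_pf(observations, n_particles=1000,
                 transition_std=1.0, obs_std=1.0):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t.

    The model is a placeholder; the structure (propagate, weight by the
    likelihood, resample) is the generic SIS/resampling recursion.
    """
    particles = rng.normal(0.0, 1.0, n_particles)     # draw from an assumed prior
    estimates = []

    for y in observations:
        # 1. propagate each particle through the state transition (proposal = prior)
        particles = 0.9 * particles + rng.normal(0.0, transition_std, n_particles)
        # 2. weight by the observation likelihood (log-domain for stability)
        log_w = -0.5 * ((y - particles) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # 3. posterior-mean estimate, then resample to avoid degeneracy
        estimates.append(np.sum(w * particles))
        particles = particles[rng.choice(n_particles, n_particles, p=w)]

    return np.array(estimates)

# usage with synthetic data:
# x, ys = 0.0, []
# for _ in range(100):
#     x = 0.9 * x + rng.normal(); ys.append(x + rng.normal())
# xhat = bootstrap_pf(ys)
```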

  1. A local particle filter for high dimensional geophysical systems

    NASA Astrophysics Data System (ADS)

    Penny, S. G.; Miyoshi, T.

    2015-12-01

    A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard Sampling Importance Resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each gridpoint. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the Local Ensemble Transform Kalman Filter (LETKF) using the 40-variable Lorenz-96 model. The results show that: (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.

  2. Bearings-Only Tracking of Manoeuvring Targets Using Particle Filters

    NASA Astrophysics Data System (ADS)

    Arulampalam, M. Sanjeev; Ristic, B.; Gordon, N.; Mansell, T.

    2004-12-01

    We investigate the problem of bearings-only tracking of manoeuvring targets using particle filters (PFs). Three different PFs are proposed for this problem, which is formulated as a multiple model tracking problem in a jump Markov system (JMS) framework. The proposed filters are (i) the multiple model PF (MMPF), (ii) the auxiliary MMPF (AUX-MMPF), and (iii) the jump Markov system PF (JMS-PF). The performance of these filters is compared with that of standard interacting multiple model (IMM)-based trackers such as IMM-EKF and IMM-UKF for three separate cases: (i) the single-sensor case, (ii) the multisensor case, and (iii) tracking with hard constraints. A conservative CRLB applicable to this problem is also derived and compared with the RMS error performance of the filters. The results confirm the superiority of the PFs for this difficult nonlinear tracking problem.

  3. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
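
    The abstract does not spell out the decomposition; one plausible reading, sketched below under that assumption, ranks measurement-space directions by the ratio of ensemble-projected forecast variance to observation-noise variance via a generalized eigenvalue problem and keeps the leading directions. This is not necessarily the authors' exact criterion.

```python
import numpy as np
from scipy.linalg import eigh

def informative_measurement_subspace(ensemble, H, R, k):
    """Pick k measurement-space directions maximizing signal-to-noise ratio.

    ensemble: (n_members, n_state) forecast ensemble
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance
    Solves (H P H^T) v = lambda R v and returns the k leading eigenvectors, which
    define a reduced set of synthetic observations y' = V^T y (assumed formulation).
    """
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (len(ensemble) - 1)   # sample forecast covariance
    A = H @ P @ H.T                                     # projected forecast variance
    vals, vecs = eigh(A, R)                             # generalized eigenproblem
    order = np.argsort(vals)[::-1]                      # largest variance ratios first
    return vecs[:, order[:k]], vals[order[:k]]
```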

  4. Localization using omnivision-based manifold particle filters

    NASA Astrophysics Data System (ADS)

    Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond

    2015-01-01

    Developing precise and low-cost spatial localization algorithms is an essential component for autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensions and excessive bandwidth. This paper presents a novel approach to indoor localization using a low cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.

  5. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
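
    A minimal sketch of the state-augmentation idea: an unknown model parameter is appended to the particle state and evolved with small artificial noise so it is tuned alongside the state. The toy exponential-decay model and all names are assumptions for illustration only, not the authors' battery model.

        import numpy as np

        def propagate(particles, dt, rng):
            # particles[:, 0] = physical state x, particles[:, 1] = model parameter theta
            x, theta = particles[:, 0], particles[:, 1]
            x_new = x * np.exp(-theta * dt) + rng.normal(0.0, 0.01, x.size)
            theta_new = theta + rng.normal(0.0, 1e-4, theta.size)  # artificial parameter evolution
            return np.column_stack([x_new, theta_new])

        def update(particles, z, obs_std, rng):
            # weight by the measurement likelihood, then resample state and parameter jointly
            w = np.exp(-0.5 * ((z - particles[:, 0]) / obs_std) ** 2)
            w /= w.sum()
            pick = rng.choice(len(particles), size=len(particles), p=w)
            return particles[pick]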

  6. Fast, parallel implementation of particle filtering on the GPU architecture

    NASA Astrophysics Data System (ADS)

    Gelencsér-Horváth, Anna; Tornai, Gábor János; Horváth, András; Cserey, György

    2013-12-01

    In this paper, we introduce a modified cellular particle filter (CPF) which we mapped onto a graphics processing unit (GPU) architecture. We developed this filter adaptation using a state-of-the-art CPF technique. Mapping this filter realization onto a highly parallel architecture entailed a shift in the logical representation of the particles: the original two-dimensional organization is reordered as a one-dimensional ring topology. We performed proof-of-concept measurements on two models with an NVIDIA Fermi architecture GPU. This design achieved a 411 μs kernel time per state and a 77 ms global running time for all states for 16,384 particles with a 256 neighbourhood size on a sequence of 24 states for a bearing-only tracking model. For a commonly used benchmark model at the same configuration, we achieved a 266 μs kernel time per state and a 124 ms global running time for all 100 states. Kernel time includes random number generation on the GPU with curand. These results attest to the effective and fast use of the particle filter in high-dimensional, real-time applications.

  7. A Novel Particle Swarm Optimization Algorithm for Global Optimization

    PubMed Central

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method, which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
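
    For context, a generic, minimal PSO sketch; the paper's best-neighbor information, abandonment mechanism, and chaotic search are not reproduced here, and all parameter values are illustrative.

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n, dim))      # particle positions
            v = np.zeros((n, dim))                 # particle velocities
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                gbest = pbest[pval.argmin()].copy()
            return gbest, pval.min()

        best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=10)   # e.g. the sphere test function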

  8. A Novel Particle Swarm Optimization Algorithm for Global Optimization.

    PubMed

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method, which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387

  9. Optimal filtering methods to structural damage estimation under ground excitation.

    PubMed

    Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

    2013-01-01

    This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

  10. Optimal Recursive Digital Filters for Active Bending Stabilization

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2013-01-01

    In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
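
    A minimal sketch of parameterizing a stable, minimum-phase second-order recursive filter directly by pole and zero locations inside the unit circle, in the spirit of the z-plane parameterization described; the constraint handling, phase-shaping objective, and optimization loop are omitted, and the helper name is an assumption.

        import numpy as np
        from scipy.signal import zpk2tf, freqz

        def biquad_from_polar(r_pole, theta_pole, r_zero, theta_zero, gain=1.0):
            # radii < 1 keep poles (stability) and zeros (minimum phase) inside the unit circle
            assert 0.0 <= r_pole < 1.0 and 0.0 <= r_zero < 1.0
            poles = [r_pole * np.exp(1j * theta_pole), r_pole * np.exp(-1j * theta_pole)]
            zeros = [r_zero * np.exp(1j * theta_zero), r_zero * np.exp(-1j * theta_zero)]
            b, a = zpk2tf(zeros, poles, gain)
            return np.real(b), np.real(a)

        b, a = biquad_from_polar(0.9, 0.2 * np.pi, 0.95, 0.6 * np.pi)
        w_freq, h = freqz(b, a)   # inspect magnitude/phase against the shaping constraints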

  11. Single-channel noise reduction using optimal rectangular filtering matrices.

    PubMed

    Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi

    2013-02-01

    This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: One correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both the forward and backward prediction as well as the temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation. PMID:23363124
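
    A minimal sketch of applying a rectangular filtering matrix to a noisy frame; here the matrix is the classical MMSE (Wiener-like) solution built from assumed covariance estimates, used only to illustrate the block formulation rather than the paper's orthogonal-decomposition derivation.

        import numpy as np

        def rectangular_filter_estimate(R_xy, R_yy, noisy_frame):
            """R_xy: (L, M) cross-covariance of the desired speech vector with the noisy frame,
            R_yy: (M, M) covariance of the noisy frame, noisy_frame: (M,) samples."""
            H = R_xy @ np.linalg.inv(R_yy)   # (L, M) rectangular filtering matrix
            return H @ noisy_frame           # (L,) estimate of the desired speech vector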

  12. Multiswarm Particle Swarm Optimization with Transfer of the Best Particle

    PubMed Central

    Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang

    2015-01-01

    We propose an improved algorithm, a multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization (PSO) algorithm in order to balance exploration and exploitation, as well as to enhance the capacity for global search to solve nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200

  13. Boosting target tracking using particle filter with flow control

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Chan, Moses W.

    2013-05-01

    Target detection and tracking with passive infrared (IR) sensors can be challenging due to significant degradation and corruption of target signature by atmospheric transmission and clutter effects. This paper summarizes our efforts in phenomenology modeling of boosting targets with IR sensors, and developing algorithms for tracking targets in the presence of background clutter. On the phenomenology modeling side, the clutter images are generated using a high fidelity end-to-end simulation testbed. It models atmospheric transmission, structured clutter and solar reflections to create realistic background images. The dynamics and intensity of a boosting target are modeled and injected onto the background scene. Pixel level images are then generated with respect to the sensor characteristics. On the tracking analysis side, a particle filter for tracking targets in a sequence of clutter images is developed. The particle filter is augmented with a mechanism to control particle flow. Specifically, velocity feedback is used to constrain and control the particles. The performance of the developed "adaptive" particle filter is verified with tracking of a boosting target in the presence of clutter and occlusion.

  14. Distributed Particle Filter for Target Tracking: With Reduced Sensor Communications.

    PubMed

    Ghirmai, Tadesse

    2016-01-01

    For efficient and accurate estimation of the location of objects, a network of sensors can be used to detect and track targets in a distributed manner. In nonlinear and/or non-Gaussian dynamic models, distributed particle filtering methods are commonly applied to develop target tracking algorithms. An important consideration in developing a distributed particle filtering algorithm in wireless sensor networks is reducing the size of data exchanged among the sensors because of power and bandwidth constraints. In this paper, we propose a distributed particle filtering algorithm with the objective of reducing the overhead data that is communicated among the sensors. In our algorithm, the sensors exchange information to collaboratively compute the global likelihood function that encompasses the contribution of the measurements towards building the global posterior density of the unknown location parameters. Each sensor, using its own measurement, computes its local likelihood function and approximates it using a Gaussian function. The sensors then propagate only the mean and the covariance of their approximated likelihood functions to other sensors, reducing the communication overhead. The global likelihood function is computed collaboratively from the parameters of the local likelihood functions using an average consensus filter or a forward-backward propagation information exchange strategy. PMID:27618057
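
    A minimal sketch of the two ingredients described: approximating a node's local likelihood by a Gaussian, and combining the local Gaussians in information form, whose sums would be obtained across a network by average consensus scaled by the number of nodes. Function names and the weighted-particle representation are assumptions.

        import numpy as np

        def gaussian_approximation(particles, weights):
            # weights are assumed normalized to sum to one
            mean = np.average(particles, axis=0, weights=weights)
            diff = particles - mean
            cov = (weights[:, None] * diff).T @ diff
            return mean, cov

        def fuse_local_gaussians(means, covs):
            # product of Gaussians in information (inverse-covariance) form
            infos = [np.linalg.inv(C) for C in covs]
            info_sum = sum(infos)
            vec_sum = sum(I @ m for I, m in zip(infos, means))
            fused_cov = np.linalg.inv(info_sum)
            return fused_cov @ vec_sum, fused_cov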

  15. Expected likelihood for tracking in clutter with particle filters

    NASA Astrophysics Data System (ADS)

    Marrs, Alan; Maskell, Simon; Bar-Shalom, Yaakov

    2002-08-01

    The standard approach to tracking a single target in clutter, using the Kalman filter or extended Kalman filter, is to gate the measurements using the predicted measurement covariance and then to update the predicted state using probabilistic data association. When tracking with a particle filter, an analog to the predicted measurement covariance is not directly available and could only be constructed as an approximation to the current particle cloud. A common alternative is to use a form of soft gating, based upon a Student's-t likelihood, that is motivated by the concept of score functions in classical statistical hypothesis testing. In this paper, we combine the score function and probabilistic data association approaches to develop a new method for tracking in clutter using a particle filter. This is done by deriving an expected likelihood from known measurement and clutter statistics. The performance of this new approach is assessed on a series of bearings-only tracking scenarios with uncertain sensor location and non-Gaussian clutter.
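
    A minimal sketch of an expected-likelihood particle weight in clutter, written in the standard probabilistic-data-association form; the detection probability, clutter density, and Gaussian measurement model are illustrative assumptions rather than the paper's exact expressions.

        import numpy as np

        def expected_likelihood_weight(pred_meas, gated_meas, R, p_d, clutter_density):
            """pred_meas: predicted measurement h(x) for one particle;
            gated_meas: (m, d) validated measurements; R: measurement covariance."""
            d = len(pred_meas)
            norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(R))
            R_inv = np.linalg.inv(R)
            target_like = 0.0
            for z in gated_meas:
                v = z - pred_meas
                target_like += norm * np.exp(-0.5 * v @ R_inv @ v)
            # either no gated measurement came from the target, or one of them did
            return (1.0 - p_d) + p_d * target_like / clutter_density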

  16. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  17. Random set particle filter for bearings-only multitarget tracking

    NASA Astrophysics Data System (ADS)

    Vihola, Matti

    2005-05-01

    The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the state of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model where each sensor report is assumed to contain at most one measurement. The implemented filter was tested in synthetic bearings-only tracking scenarios containing up to two targets in the presence of false alarms and missed measurements. The estimated target state consisted of 2D position and velocity components. The filter was capable of tracking the targets fairly well despite the missed measurements and the relatively high false alarm rates. In addition, the filter showed robustness against incorrect false alarm rate parameter values. The results obtained during the limited tests of the filter show that the random set framework has potential for challenging tracking situations. On the other hand, the computational burden of the described implementation is quite high and increases approximately linearly with the expected number of targets.

  18. Distributed soft-data-constrained multi-model particle filter.

    PubMed

    Seifzadeh, Sepideh; Khaleghi, Bahador; Karray, Fakhri

    2015-03-01

    A distributed nonlinear estimation method based on soft-data-constrained multimodel particle filtering and applicable to a number of distributed state estimation problems is proposed. This method needs only local data exchange among neighboring sensor nodes and thus provides enhanced reliability, scalability, and ease of deployment. To make the multimodel particle filtering work in a distributed manner, a Gaussian approximation of the particle cloud obtained at each sensor node and a consensus propagation-based distributed data aggregation scheme are used to dynamically reweight the particles. The proposed method can recover from failure situations and is robust to noise, since it keeps the same population of particles and uses the aggregated global Gaussian to infer constraints. The constraints are enforced by adjusting particles' weights and assigning a higher mass to those closer to the global estimate represented by the nodes in the entire sensor network after each communication step. Each sensor node experiences gradual change; i.e., if noise occurs in the system, the node, its neighbors, and consequently the overall network are less affected than with other approaches, and thus recover faster. The efficiency of the proposed method is verified through extensive simulations for a target tracking system which can process both soft and hard data in sensor networks. PMID:24956539

  19. Na-Faraday rotation filtering: The optimal point

    PubMed Central

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  20. Na-Faraday rotation filtering: the optimal point.

    PubMed

    Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

    2014-01-01

    Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim to find the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal to background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

  1. Optimization of the development process for air sampling filter standards

    NASA Astrophysics Data System (ADS)

    Mena, RaJah Marie

    Air monitoring is an important analysis technique in health physics. However, creating standards which can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location of the activity within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and to optimize these parameters such that calibrations made with them most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to provide the best approximation of uniform deposition of material across the filter. Sample sets were created varying the type of radionuclide, the amount of activity (high activity at 6.4 -- 306 Bq/filter and low activity at 0.05 -- 6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors and ensure this is a viable procedure, especially for alpha contaminants.

  2. Optimal Correlation Filters for Images with Signal-Dependent Noise

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Walkup, John F.

    1994-01-01

    We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
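
    A minimal sketch of approach (2): apply a variance-stabilizing nonlinearity (a square root is used here as an assumed example for signal-dependent noise) and then correlate with a classical filter matched to the transformed reference; array shapes and names are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve

        def matched_filter_after_transform(image, reference):
            # nonlinearly transform signal-dependent noise toward signal-independent noise
            img_t = np.sqrt(np.maximum(image, 0.0))
            ref_t = np.sqrt(np.maximum(reference, 0.0))
            # classical matched filter = correlation with the (zero-mean) transformed template
            kernel = (ref_t - ref_t.mean())[::-1, ::-1]
            return fftconvolve(img_t - img_t.mean(), kernel, mode='same')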

  3. Optimization of narrow optical spectral filters for nonparallel monochromatic radiation.

    PubMed

    Linder, S L

    1967-07-01

    This paper delineates a method of determining the design criteria for narrow optical passband filters used in the reception of nonparallel modulated monochromatic radiation. The analysis results in straightforward mathematical expressions for calculating the filter width and design center wavelength which maximize the signal-to-noise ratio. Two cases are considered: (a) the filter is designed to have a maximum transmission (for normal incidence) at the incident wavelength, but with the spectral width optimized, and (b) both the design wavelength and the spectral width are optimized. It is shown that the voltage signal-to-noise ratio for case (b) is 2^(1/2) times that of case (a). Numerical examples are calculated. PMID:20062163

  4. Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.

    PubMed

    Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu

    2015-10-01

    The particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of the particles' historical promising pbests in PSO are lost except their current pbests. In order to solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, particles' current pbests, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms. PMID:26390177

  5. Opdic (optimized Peak, Distortion and Clutter) Detection Filter.

    NASA Astrophysics Data System (ADS)

    House, Gregory Philip

    1995-01-01

    Detection is considered. This involves determining regions of interest (ROIs) in a scene: the locations of multiple object classes in a scene in clutter when object distortions and contrast differences are present. A high probability of detection P_D is essential and a low P_FA is desirable, since subsequent stages in the full system will only decrease P_FA and cannot increase P_D. Low-resolution blob objects and objects with more internal detail are considered, with both 3-D aspect view and depression angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation plane energy due to distortions and clutter while satisfying correlation peak constraint values for various object-aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated noise clutter model; a white Gaussian noise model of distortion was used; and new techniques to increase the number of training set images (N_T) included in the filter were developed. Excellent results were obtained. However, the correlation plane distortion and clutter energy functions were found to become worse as N_T was increased, and no rigorous method exists to select the best N_T (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retained the new object, clutter and distortion models noted. It minimizes the variance of the correlation peak values for all training set images (not just the N_T images). As N_T increases, the peak variance and the objective functions (correlation plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized). Tests show

  6. Particle swarm optimization for complex nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos

    2016-06-01

    This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is setup and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.

  7. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be computed efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in the convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error are 76.04% and 71.25%, respectively. PMID:26391486

  8. Measurement of particle sulfate from micro-aethalometer filters

    NASA Astrophysics Data System (ADS)

    Wang, Qingqing; Yang, Fumo; Wei, Lianfang; Zheng, Guangjie; Fan, Zhongjie; Rajagopalan, Sanjay; Brook, Robert D.; Duan, Fengkui; He, Kebin; Sun, Yele; Brook, Jeffrey R.

    2014-10-01

    The micro-aethalometer (AE51) was designed for high time resolution black carbon (BC) measurements, and in the process it collects particles on a filter inside the instrument. Here we examine the potential for saving these filters for subsequent sulfate (SO42-) measurement. For this purpose, a series of lab and field blanks was analyzed to characterize blank levels and variability, and then collocated 24-h aerosol sampling was conducted in Beijing with the AE51 and a dual-channel filterpack sampler that collects fine particles (PM2.5). AE51 filters and the filters from the filterpacks sampled for 24 h were extracted with ultrapure water and then analyzed by ion chromatography (IC) to determine the integrated SO42- concentration. Blank corrections were essential, and the detection limit for 24-h AE51 sampling of SO42- was estimated to be 1.4 μg/m3. The SO42- measured from the AE51 based upon blank corrections using batch-average field blank SO42- values was found to be in reasonable agreement with the filterpack results (R2 > 0.87, slope = 1.02), indicating that it is possible to determine both BC and SO42- concentrations using the AE51 in Beijing. This result suggests that future comparison of the relative health impacts of BC and SO42- could be possible when the AE51 is used for personal exposure measurement.

  9. Marginalized Particle Filter for Blind Signal Detection with Analog Imperfections

    NASA Astrophysics Data System (ADS)

    Yoshida, Yuki; Hayashi, Kazunori; Sakai, Hideaki; Bocquet, Wladimir

    Recently, the marginalized particle filter (MPF) has been applied to blind symbol detection problems over selective fading channels. The MPF can ease the computational burden of the standard particle filter (PF) while offering better estimates compared with the standard PF. In this paper, we investigate the application of the blind MPF detector to more realistic situations where the systems suffer from analog imperfections which are non-linear signal distortion due to the inaccurate analog circuits in wireless devices. By reformulating the system model using the widely linear representation and employing the auxiliary variable resampling (AVR) technique for estimation of the imperfections, the blind MPF detector is successfully modified to cope with the analog imperfections. The effectiveness of the proposed MPF detector is demonstrated via computer simulations.

  10. Degeneracy, frequency response and filtering in IMRT optimization

    NASA Astrophysics Data System (ADS)

    Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus

    2004-07-01

    This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

  11. Linear multistep methods, particle filtering and sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2013-08-01

    Numerical integration is the main bottleneck in particle filter methodologies for dynamic inverse problems to estimate model parameters, initial values, and non-observable components of an ordinary differential equation (ODE) system from partial, noisy observations, because proposals may result in stiff systems which first slow down or paralyze the time integration process, then end up being discarded. The immediate advantage of formulating the problem in a sequential manner is that the integration is carried out on shorter intervals, thus reducing the risk of long integration processes followed by rejections. We propose to solve the ODE systems within a particle filter framework with higher order numerical integrators which can handle stiffness and to base the choice of the variance of the innovation on estimates of the discretization errors. The application of linear multistep methods to particle filters gives a handle on the stability and accuracy of the propagation, and linking the innovation variance to the accuracy estimate helps keep the variance of the estimate as low as possible. The effectiveness of the methodology is demonstrated with a simple ODE system similar to those arising in biochemical applications.

  12. Optimal color image restoration: Wiener filter and quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, we consider the model of quaternion signal degradation in which the signal is convolved and additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is defined with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative. The quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise and the optimal filtering of the signal and images in the frequency domain and in the quaternion space.
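
    For reference, a minimal sketch of the classical real-valued frequency-domain Wiener deconvolution that the paper generalizes to quaternion signals; the quaternion case itself, where convolution no longer maps to multiplication, is not reproduced here.

        import numpy as np

        def wiener_deconvolve(y, h, signal_psd, noise_psd):
            """y: degraded 1-D signal, h: degradation kernel zero-padded to len(y),
            signal_psd, noise_psd: power spectral densities per frequency bin."""
            Y, H = np.fft.fft(y), np.fft.fft(h)
            W = np.conj(H) * signal_psd / (np.abs(H) ** 2 * signal_psd + noise_psd)
            return np.real(np.fft.ifft(W * Y))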

  13. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.

  14. Ensemble Data Assimilation for Streamflow Forecasting: Experiments with Ensemble Kalman Filter and Particle Filter

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.

    2011-12-01

    We present results of data assimilation of ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
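
    A minimal sketch of a stochastic (perturbed-observation) EnKF analysis step of the kind used to update model soil-moisture states; the matrices and names are generic assumptions, not the study's configuration.

        import numpy as np

        def enkf_update(X, y, H, R, rng):
            """X: (n, N) state ensemble, y: (m,) observation, H: (m, n) operator, R: (m, m) noise cov."""
            n, N = X.shape
            A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
            HA = H @ A
            P_yy = HA @ HA.T / (N - 1) + R                 # innovation covariance
            P_xy = A @ HA.T / (N - 1)                      # state-observation covariance
            K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
            return X + K @ (Y - H @ X)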

  15. Acoustic Radiation Optimization Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Jeon, Jin-Young; Okuma, Masaaki

    The present paper describes a fundamental study on structural bending design to reduce noise using a new evolutionary population-based heuristic algorithm called the particle swarm optimization algorithm (PSOA). The particle swarm optimization algorithm is a parallel evolutionary computation technique proposed by Kennedy and Eberhart in 1995. This algorithm is based on the social behavior models for bird flocking, fish schooling and other models investigated by zoologists. Optimal structural design problems to reduce noise are highly nonlinear, so that most conventional methods are difficult to apply. The present paper investigates the applicability of PSOA to such problems. Optimal bending design of a vibrating plate using PSOA is performed in order to minimize noise radiation. PSOA can be effectively applied to such nonlinear acoustic radiation optimization.

  16. Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

    NASA Astrophysics Data System (ADS)

    Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

    In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.

  17. Optimal matched filter design for ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Li, Minghui; Hayward, Gordon

    2016-02-01

    Coarse grain materials are widely used in a variety of key industrial sectors like energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructures, which is time-invariant and demonstrates spectral characteristics similar to those of flaw signals. As a result, the reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters in the filter response with an objective to maximise the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB if the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real-time with low extra computational cost.

  18. Selectively-informed particle swarm optimization.

    PubMed

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

  19. Selectively-informed particle swarm optimization

    PubMed Central

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-01-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

  20. Selectively-informed particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Du, Wenbo; Yan, Gang

    2015-03-01

    Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors.

  1. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867

  2. System-level optimization of baseband filters for communication applications

    NASA Astrophysics Data System (ADS)

    Delgado-Restituto, Manuel; Fernandez-Bootello, Juan F.; Rodriguez-Vazquez, Angel

    2003-04-01

    In this paper, we present a design approach for the high-level synthesis of programmable continuous-time Gm-C and active-RC filters with optimum trade-off among dynamic range, distortion products generation, area consumption and power dissipation, thus meeting the needs of more demanding baseband filter realizations. Further, the proposed technique guarantees that under all programming configurations, transconductors (in Gm-C filters) and resistors (in active-RC filters) as well as capacitors, are related by integer ratios in order to reduce the sensitivity to mismatch of the monolithic implementation. In order to solve the aforementioned trade-off, the filter must be properly scaled at each configuration. It means that filter node impedances must be conveniently altered so that the noise contribution of each node to the filter output be as low as possible, while avoiding that peak amplitudes at such nodes be so high as to drive active circuits into saturation. Additionally, in order to not degrade the distortion performance of the filter (in particular, if it is implemented using Gm-C techniques) node impedances can not be scaled independently from each other but restrictions must be imposed according to the principle of nonlinear cancellation. Altogether, the high-level synthesis can be seen as a constrained optimization problem where some of the variables, namely, the ratios among similar components, are restricted to discrete values. The proposed approach to accomplish optimum filter scaling under all programming configurations, relies on matrix methods for network representation, which allows an easy estimation of performance features such as dynamic range and power dissipation, as well as other network properties such as sensitivity to parameter variations and non-ideal effects of integrators blocks; and the use of a simulated annealing algorithm to explore the design space defined by the transfer and group delay specifications. It must be noted that such

  3. Auxiliary particle filter-model predictive control of the vacuum arc remelting process

    NASA Astrophysics Data System (ADS)

    Lopez, F.; Beaman, J.; Williamson, R.

    2016-07-01

    Solidification control is required for the suppression of segregation defects in vacuum arc remelting of superalloys. In recent years, process controllers for the VAR process have been proposed based on linear models, which are known to be inaccurate in highly-dynamic conditions, e.g. start-up, hot-top and melt rate perturbations. A novel controller is proposed using auxiliary particle filter-model predictive control based on a nonlinear stochastic model. The auxiliary particle filter approximates the probability of the state, which is fed to a model predictive controller that returns an optimal control signal. For simplicity, the estimation and control problems are solved using Sequential Monte Carlo (SMC) methods. The validity of this approach is verified for a 430 mm (17 in) diameter Alloy 718 electrode melted into a 510 mm (20 in) diameter ingot. Simulation shows a more accurate and smoother performance than the one obtained with an earlier version of the controller.

  4. Pixelated source optimization for optical lithography via particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Li, Sikun; Wang, Xiangzhao; Yan, Guanyong; Yang, Chaoxing

    2016-01-01

    Source optimization is one of the key techniques for achieving higher resolution without increasing the complexity of mask design. An efficient source optimization approach is proposed on the basis of particle swarm optimization. The pixelated sources are encoded into particles, which are evaluated by using the pattern error as the fitness function. Afterward, the optimization is implemented by updating the velocities and positions of these particles. This approach is demonstrated using three mask patterns, including a periodic array of contact holes, a vertical line/space design, and a complicated pattern. The pattern errors are reduced by 69.6%, 51.5%, and 40.3%, respectively. Compared with the source optimization approach via genetic algorithm, the proposed approach leads to faster convergence while improving the image quality at the same time. Compared with the source optimization approach via gradient descent method, the proposed approach does not need the calculation of gradients, and it has a strong adaptation to various lithographic models, fitness functions, and resist models. The robustness of the proposed approach to initial sources is also verified.

  5. Ridge filter design for a particle therapy line

    NASA Astrophysics Data System (ADS)

    Kim, Chang Hyeuk; Han, Garam; Lee, Hwa-Ryun; Kim, Hyunyong; Jang, Hong Suk; Kim, Jeong Hwan; Park, Dong Wook; Jang, Sea Duk; Hwang, Won Taek; Kim, Geun-Beom; Yang, Tae-Keun

    2014-05-01

    The beam irradiation system for particle therapy can use a passive or an active beam irradiation method. In the case of active beam irradiation, using a ridge filter is appropriate to generate a spread-out Bragg peak (SOBP) over a large scanning area. For this study, a ridge filter was designed as an energy modulation device for a prototype active scanning system at MC-50 in the Korea Institute of Radiological And Medical Science (KIRAMS). The ridge filter was designed to create a 10-mm SOBP for a 45-MeV proton beam. To reduce the distal penumbra and the initial dose, the weighting factors for the Bragg peaks were determined by applying an in-house iteration code and the Minuit fit package of ROOT. A single ridge bar shape and its corresponding thickness were obtained through 21 weighting factors. Also, a ridge filter was fabricated from polymethyl methacrylate (PMMA) to cover a large scanning area (300 × 300 mm2). The fabricated ridge filter was tested at the prototype active beamline of MC-50. The SOBP and the incident beam distribution were obtained by using HD-810 GafChromic film placed at a right triangle to the PMMA block. The depth dose profile for the SOBP can be obtained precisely by using the flat field correction and measuring the 2-dimensional distribution of the incoming beam. After the flat field correction is applied, the experimental results show that the SOBP region matches the design requirement well, with 0.62% uniformity.
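
    A minimal sketch of one way the Bragg-peak weighting factors could be chosen: given depth-dose curves for the range-shifted peaks, solve a non-negative least-squares problem against a flat target dose over the SOBP region. This is an illustrative stand-in for the in-house iteration code and Minuit fit mentioned, with assumed names.

        import numpy as np
        from scipy.optimize import nnls

        def sobp_weights(bragg_curves, target_dose):
            """bragg_curves: (n_depths, n_peaks) dose per range-shifted Bragg peak;
            target_dose: (n_depths,) desired flat dose across the SOBP region."""
            w, _residual = nnls(bragg_curves, target_dose)
            return w / w.max()   # normalized weighting factors for the ridge steps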

  6. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic

  7. Identification of Backlash Type Hysteretic Systems Based on Particle Filter

    NASA Astrophysics Data System (ADS)

    Masuda, Tetsuya; Sugie, Toshiharu

    This paper considers the system identification problem for hysteretic systems. The problem plays an important role in achieving better control performance, because many actuators exhibit hysteresis. This paper proposes a method to identify linear dynamical systems having input hysteresis of backlash type. The method is based on the particle filter, which is known for its applicability to a wide class of nonlinear systems. Numerical examples are given to demonstrate the effectiveness of the proposed method in detail. Furthermore, experimental validation is performed on a DC servo motor system.

  8. Particle filter based on thermophoretic deposition from natural convection flow

    SciTech Connect

    Sasse, A.G.B.M.; Nazaroff, W.W.; Gadgil, A.J.

    1994-04-01

    We present an analysis of particle migration in a natural convection flow between parallel plates and within the annulus of concentric tubes. The flow channel is vertically oriented with one surface maintained at a higher temperature than the other. Particle migration is dominated by advection in the vertical direction and thermophoresis in the horizontal direction. From scale analysis it is demonstrated that particles are completely removed from air flowing through the channel if its length exceeds L_c = b^4 g/(24 K ν^2), where b is the width of the channel, g is the acceleration of gravity, K is a thermophoretic coefficient of order 0.5, and ν is the kinematic viscosity of air. Precise predictions of particle removal efficiency as a function of system parameters are obtained by numerical solution of the governing equations. Based on the model results, it appears feasible to develop a practical filter for removing smoke particles from a smoldering cigarette in an ashtray by using natural convection in combination with thermophoresis. 22 refs., 8 figs., 1 tab.
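
    The critical length quoted above is simple to evaluate; the sketch below codes L_c = b^4 g/(24 K ν^2) directly, with K = 0.5 as stated and an assumed kinematic viscosity for room-temperature air.

      # Critical channel length from the scale analysis above: L_c = b**4 * g / (24 * K * nu**2).
      def critical_length(b, K=0.5, nu=1.6e-5, g=9.81):
          """Channel length [m] for complete particle removal, given channel width b [m]."""
          return b**4 * g / (24.0 * K * nu**2)

      for b_mm in (2.0, 4.0, 6.0):                        # illustrative channel widths
          print("b = %.0f mm  ->  L_c = %.3f m" % (b_mm, critical_length(b_mm / 1000.0)))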

  9. Tracking low SNR targets using particle filter with flow control

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2014-06-01

    In this work we study the problem of detecting and tracking challenging targets that exhibit low signal-to-noise ratios (SNR). We have developed a particle filter-based track-before-detect (TBD) algorithm for tracking such dim targets. The approach incorporates the most recent state estimates to control the particle flow accounting for target dynamics. The flow control enables accumulation of signal information over time to compensate for target motion. The performance of this approach is evaluated using a sensitivity analysis based on varying target speed and SNR values. This analysis was conducted using high-fidelity sensor and target modeling in realistic scenarios. Our results show that the proposed TBD algorithm is capable of tracking targets in cluttered images with SNR values much less than one.

  10. Loss of Fine Particle Ammonium from Denuded Nylon Filters

    SciTech Connect

    Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.

    2006-08-01

    Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May, 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July–August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(--III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.

  11. Nonlinear EEG Decoding Based on a Particle Filter Model

    PubMed Central

    Hong, Jun

    2014-01-01

    While the world is stepping into the aging society, rehabilitation robots play a more and more important role in both rehabilitation treatment and nursing of patients with neurological diseases. Benefiting from its abundant movement information, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model has been used as the decoding model of EEG signals in some studies, it is considered unable to reflect the nonlinear components of EEG signals. In order to overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420

  12. Comparison of EKF, pseudomeasurement, and particle filters for a bearing-only target tracking problem

    NASA Astrophysics Data System (ADS)

    Lin, Xiangdong; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov; Maskell, Simon

    2002-08-01

    In this paper we consider a nonlinear bearing-only target tracking problem using three different methods and compare their performances. The study is motivated by a ground surveillance problem where a target is tracked from an airborne sensor at an approximately known altitude using depression angle observations. Two nonlinear suboptimal estimators, namely, the extended Kalman filter (EKF) and the pseudomeasurement tracking filter, are applied in a 2-D bearing-only tracking scenario. The EKF is based on the linearization of the nonlinearities in the dynamic and/or the measurement equations. The pseudomeasurement tracking filter manipulates the original nonlinear measurement algebraically to obtain a measurement with a linear-like structure. Finally, the particle filter, which is a Monte Carlo integration based optimal nonlinear filter and has been presented in the literature as a better alternative to linearization via the EKF, is used on the same problem. The performances of these three different techniques in terms of accuracy and computational load are presented in this paper. The results demonstrate the limitations of these algorithms on this deceptively simple tracking problem.
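
    For readers unfamiliar with the third method compared above, the following is a minimal sampling-importance-resampling particle filter for a 2-D bearings-only scenario. The target dynamics, sensor track, noise levels and particle count are illustrative assumptions and do not reproduce the paper's scenario.

      # Minimal SIR (sampling-importance-resampling) particle filter for 2-D bearings-only tracking.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, steps, N = 1.0, 40, 2000
      sigma_b = np.deg2rad(1.0)                              # bearing measurement noise [rad]

      F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                    [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
      x_true = np.array([2000.0, 1500.0, -8.0, 5.0])         # true target [x, y, vx, vy]
      sensor = lambda k: np.array([30.0 * k, 0.0])           # sensor moves along the x-axis

      # Particles drawn from a broad Gaussian prior around a rough initial guess.
      particles = rng.normal([1800.0, 1700.0, 0.0, 0.0], [500.0, 500.0, 10.0, 10.0], size=(N, 4))
      weights = np.full(N, 1.0 / N)

      for k in range(steps):
          x_true = F @ x_true
          z = np.arctan2(x_true[1] - sensor(k)[1], x_true[0] - sensor(k)[0]) + rng.normal(0.0, sigma_b)

          # Predict: propagate particles with small process noise.
          particles = particles @ F.T + rng.normal(0.0, [1.0, 1.0, 0.3, 0.3], size=(N, 4))

          # Update: weight by the bearing likelihood (angle error wrapped to [-pi, pi]).
          pred = np.arctan2(particles[:, 1] - sensor(k)[1], particles[:, 0] - sensor(k)[0])
          err = np.angle(np.exp(1j * (z - pred)))
          weights *= np.exp(-0.5 * (err / sigma_b) ** 2)
          weights /= weights.sum()

          # Resample when the effective sample size drops below N/2.
          if 1.0 / np.sum(weights ** 2) < N / 2:
              idx = rng.choice(N, size=N, p=weights)
              particles, weights = particles[idx], np.full(N, 1.0 / N)

      estimate = weights @ particles
      print("true position:", x_true[:2].round(1), " PF estimate:", estimate[:2].round(1))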

  13. A general sequential Monte Carlo method based optimal wavelet filter: A Bayesian approach for extracting bearing fault features

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Sun, Shilong; Tse, Peter W.

    2015-02-01

    A general sequential Monte Carlo method, particularly a general particle filter, has recently attracted much attention in prognostics because it is able to estimate, on-line, the posterior probability density functions of the state functions used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of the wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics and can be applied more widely to any other situation in which optimal wavelet filtering is required.

  14. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression and human-computer interaction. As a typical algorithm in the target tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristic, is able to deal with nonlinear and non-Gaussian problems, so it is widely used. Various forms of the proposal density allow the particle filter to remain valid when the target is occluded or to recover tracking after failure, but in order to capture changes of the state space a sufficient number of particles is needed, and this number increases exponentially with the dimension, which increases the amount of computation. In this paper the particle filter algorithm and mean shift are combined, aiming at the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against a complex background. From the two perspectives of "adaptive multiple information fusion" and "combination with the particle filter framework", we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple information fusion. On the basis of an analysis of the infrared characteristics of the target, the algorithm first extracts the target's grayscale and edge features and guides these two features with the target's motion information, yielding a motion-guided grayscale feature and a motion-guided edge feature. A new adaptive fusion mechanism is then proposed, which adaptively integrates these two new types of information into the mean shift tracking framework. Finally, we design an automatic target model updating strategy

  15. Filtering of windborne particles by a natural windbreak

    NASA Astrophysics Data System (ADS)

    Bouvet, Thomas; Loubet, Benjamin; Wilson, John D.; Tuzet, Andree

    2007-06-01

    New measurements of the transport and deposition of artificial heavy particles (glass beads) to a thick ‘shelterbelt’ of maize (width/height ratio W/ H ≈ 1.6) are used to test numerical simulations with a Lagrangian stochastic trajectory model driven by the flow field from a RANS (Reynolds-averaged, Navier-Stokes) wind and turbulence model. We illustrate the ambiguity inherent in applying to such a thick windbreak the pre-existing (Raupach et al. 2001; Atmos. Environ. 35, 3373-3383) ‘thin windbreak’ theory of particle filtering by vegetation, and show that the present description, while much more laborious, provides a reasonably satisfactory account of what was measured. A sizeable fraction of the particle flux entering the shelterbelt across its upstream face is lifted out of its volume by the mean updraft induced by the deceleration of the flow in the near-upstream and entry region, and these particles thereby escape deposition in the windbreak.

  16. Lagrange Interpolation Learning Particle Swarm Optimization.

    PubMed

    Kai, Zhang; Jinchun, Song; Ke, Ni; Song, Li

    2016-01-01

    In recent years, comprehensive learning particle swarm optimization (CLPSO) has attracted the attention of many scholars for use in solving multimodal problems, as it is excellent in preserving the particles' diversity and thus preventing premature convergence. However, CLPSO exhibits low solution accuracy. Aiming to address this issue, we propose a novel algorithm called LILPSO. First, this algorithm introduces a Lagrange interpolation method to perform a local search for the global best point (gbest). Second, to obtain a better exemplar, the gbest and two other particles' historical best points (pbest) are chosen to perform Lagrange interpolation, yielding a new exemplar that replaces CLPSO's comparison method. The numerical experiments conducted on various functions demonstrate the superiority of this algorithm, and the two methods are shown to be efficient at accelerating convergence without leading the particles to premature convergence. PMID:27123982
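
    The local search named above reduces, per dimension, to fitting a parabola through three (position, fitness) pairs and moving to its vertex. The sketch below shows that step in isolation; the sphere test function and the three sample points are illustrative assumptions.

      # Vertex of the Lagrange quadratic through three (x, f(x)) pairs, used as a refined candidate.
      import numpy as np

      def lagrange_vertex(x, f):
          x1, x2, x3 = x
          f1, f2, f3 = f
          num = f1 * (x2**2 - x3**2) + f2 * (x3**2 - x1**2) + f3 * (x1**2 - x2**2)
          den = 2.0 * (f1 * (x2 - x3) + f2 * (x3 - x1) + f3 * (x1 - x2))
          return x1 if abs(den) < 1e-12 else num / den    # fall back if the points are collinear

      sphere = lambda v: float(np.sum(np.asarray(v) ** 2))
      pts = np.array([0.8, -0.5, 1.6])                    # e.g. gbest and two pbest values (1-D case)
      vals = np.array([sphere([p]) for p in pts])
      print("refined point:", lagrange_vertex(pts, vals)) # ~0.0, the sphere minimum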

  17. Unit Commitment by Adaptive Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa

    This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem of variable size and load demand, adaptive criteria are applied to the PSO parameters and to the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated in each generation instead of being kept fixed. To keep the method from stagnating, idle particles are reset. The real-valued velocity is digitized (0/1) by a logistic function for the binary UC problem. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
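
    The binary-coding step mentioned at the end of the abstract can be sketched as follows: the real-valued velocity is passed through a logistic (sigmoid) function and compared with a uniform random number to give a 0/1 commitment state. Swarm size, unit count and velocity limits are illustrative assumptions.

      # Logistic digitization of PSO velocities into 0/1 unit-commitment states.
      import numpy as np

      rng = np.random.default_rng(1)
      n_particles, n_units, n_hours = 20, 5, 24
      v = rng.uniform(-4.0, 4.0, size=(n_particles, n_units, n_hours))   # real-valued velocities

      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
      u = (rng.random(v.shape) < sigmoid(v)).astype(int)                 # binary schedules
      print("fraction of unit-hours committed:", round(float(u.mean()), 3))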

  18. Solving constrained optimization problems with hybrid particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zahara, Erwie; Hu, Chia-Hsin

    2008-11-01

    Constrained optimization problems (COPs) are very important in that they frequently appear in the real world. A COP, in which both the function and constraints may be nonlinear, consists of the optimization of a function subject to constraints. Constraint handling is one of the major concerns when solving COPs with particle swarm optimization (PSO) combined with the Nelder-Mead simplex search method (NM-PSO). This article proposes embedded constraint handling methods, which include the gradient repair method and constraint fitness priority-based ranking method, as a special operator in NM-PSO for dealing with constraints. Experiments using 13 benchmark problems are explained and the NM-PSO results are compared with the best known solutions reported in the literature. Comparison with three different meta-heuristics demonstrates that NM-PSO with the embedded constraint operator is extremely effective and efficient at locating optimal solutions.

  19. Multi-prediction particle filter for efficient parallelized implementation

    NASA Astrophysics Data System (ADS)

    Chu, Chun-Yuan; Chao, Chih-Hao; Chao, Min-An; Wu, An-Yeu Andy

    2011-12-01

    The particle filter (PF) is an emerging signal processing methodology, which can effectively deal with nonlinear and non-Gaussian signals by a sample-based approximation of the state probability density function. The particle generation of the PF is a data-independent procedure and can be implemented in parallel. However, the resampling procedure in the PF is inherently sequential and difficult to parallelize. According to Amdahl's law, the sequential portion of a task limits the maximum speed-up of the parallelized implementation. Moreover, a large number of particles is usually required to obtain an accurate estimation, and the complexity of the resampling procedure is highly related to the number of particles. In this article, we propose a multi-prediction (MP) framework with two selection approaches. The proposed MP framework can reduce the number of particles required for a target estimation accuracy, and the sequential operation of the resampling can be reduced. Besides, the overhead of the MP framework can easily be compensated by parallel implementation. The proposed MP-PF alleviates the global sequential operation by increasing the local parallel computation. In addition, the MP-PF is very suitable for multi-core graphics processing unit (GPU) platforms, which are a popular parallel processing architecture. We give prototypical implementations of the MP-PFs on a multi-core GPU platform. For the classic bearing-only tracking experiments, the proposed MP-PF can be 25.1 and 15.3 times faster than the sequential importance resampling PF with 10,000 and 20,000 particles, respectively. Hence, the proposed MP-PF can enhance the efficiency of the parallelization.
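
    The Amdahl's-law bound referred to above is easy to make concrete: with a sequential fraction s (for example, the resampling step) the speed-up on p processors is 1 / (s + (1 - s) / p). The fractions and core counts below are illustrative, not measurements from the article.

      # Amdahl's-law speed-up for a task whose sequential fraction is s.
      def amdahl_speedup(s, p):
          return 1.0 / (s + (1.0 - s) / p)

      for s in (0.05, 0.20):
          print("s = %.2f:" % s,
                [round(amdahl_speedup(s, p), 2) for p in (2, 8, 64, 1024)])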

  20. Multi-strategy coevolving aging particle optimization.

    PubMed

    Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante

    2014-02-01

    We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer. PMID:24344695

  1. Achieving sub-nanometre particle mapping with energy-filtered TEM.

    PubMed

    Lozano-Perez, S; de Castro Bernal, V; Nicholls, R J

    2009-09-01

    A combination of state-of-the-art instrumentation and optimized data processing has enabled for the first time the chemical mapping of sub-nanometre particles using energy-filtered transmission electron microscopy (EFTEM). Multivariate statistical analysis (MSA) generated reconstructed datasets in which the signal from particles smaller than 1 nm in diameter was successfully isolated from the original noisy background. The technique has been applied to the characterization of oxide dispersion strengthened (ODS) reduced-activation FeCr alloys, due to their relevance as structural materials for future fusion reactors. Results revealed that most nanometre-sized particles had a core-shell structure, with a yttrium-chromium-oxygen-rich core and a nanoscale chromium-oxygen-rich shell. This segregation to the nanoparticles caused a decrease in the chromium dissolved in the matrix, compromising the corrosion resistance of the alloy. PMID:19505762

  2. Analysis of single particle diffusion with transient binding using particle filtering.

    PubMed

    Bernstein, Jason; Fricks, John

    2016-07-21

    Diffusion with transient binding occurs in a variety of biophysical processes, including movement of transmembrane proteins, T cell adhesion, and caging in colloidal fluids. We model diffusion with transient binding as a Brownian particle undergoing Markovian switching between free diffusion when unbound and diffusion in a quadratic potential centered around a binding site when bound. Assuming the binding site is the last position of the particle in the unbound state and Gaussian observational error obscures the true position of the particle, we use particle filtering to predict when the particle is bound and to locate the binding sites. Maximum likelihood estimators of diffusion coefficients, state transition probabilities, and the spring constant in the bound state are computed with a stochastic Expectation-Maximization (EM) algorithm. PMID:27107737

  3. Quantum demolition filtering and optimal control of unstable systems.

    PubMed

    Belavkin, V P

    2012-11-28

    A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216

  4. Robust Tracking Using Particle Filter with a Hybrid Feature

    NASA Astrophysics Data System (ADS)

    Zhao, Xinyue; Satoh, Yutaka; Takauji, Hidenori; Kaneko, Shun'ichi

    This paper presents a novel method for robust object tracking in video sequences using a hybrid feature-based observation model in a particle filtering framework. An ideal observation model should have both high ability to accurately distinguish objects from the background and high reliability to identify the detected objects. Traditional features are better at solving the former problem but weak in solving the latter one. To overcome that, we adopt a robust and dynamic feature called Grayscale Arranging Pairs (GAP), which has high discriminative ability even under conditions of severe illumination variation and dynamic background elements. Together with the GAP feature, we also adopt the color histogram feature in order to take advantage of traditional features in resolving the first problem. At the same time, an efficient and simple integration method is used to combine the GAP feature with color information. Comparative experiments demonstrate that object tracking with our integrated features performs well even when objects go across complex backgrounds.

  5. Geoacoustic and source tracking using particle filtering: experimental results.

    PubMed

    Yardim, Caglar; Gerstoft, Peter; Hodgkiss, William S

    2010-07-01

    A particle filtering (PF) approach is presented for performing sequential geoacoustic inversion of a complex ocean acoustic environment using a moving acoustic source. This approach treats both the environmental parameters [e.g., water column sound speed profile (SSP), water depth, sediment and bottom parameters] at the source location and the source parameters (e.g., source depth, range and speed) as unknown random variables that evolve as the source moves. This allows real-time updating of the environment and accurate tracking of the moving source. As a sequential Monte Carlo technique that operates on nonlinear systems with non-Gaussian probability densities, the PF is an ideal algorithm to perform tracking of environmental and source parameters, and their uncertainties via the evolving posterior probability densities. The approach is demonstrated on both simulated data in a shallow water environment with a sloping bottom and experimental data collected during the SWellEx-96 experiment. PMID:20649203

  6. A geometric method for optimal design of color filter arrays.

    PubMed

    Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric

    2011-03-01

    A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns. PMID:20858581

  7. Particle Swarm Optimization with Double Learning Patterns.

    PubMed

    Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian

    2016-01-01

    Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm and the slave swarm are encouraged, respectively, to explore the search space to maintain swarm diversity and to learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747

  8. Particle Swarm Optimization with Double Learning Patterns

    PubMed Central

    Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian

    2016-01-01

    Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm and the slave swarm are encouraged, respectively, to explore the search space to maintain swarm diversity and to learn from the global best particle to refine a promising solution. When the evolutionary states of the two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm jump out of local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747

  9. Lagrange Interpolation Learning Particle Swarm Optimization

    PubMed Central

    2016-01-01

    In recent years, comprehensive learning particle swarm optimization (CLPSO) has attracted the attention of many scholars for use in solving multimodal problems, as it is excellent in preserving the particles' diversity and thus preventing premature convergence. However, CLPSO exhibits low solution accuracy. Aiming to address this issue, we propose a novel algorithm called LILPSO. First, this algorithm introduces a Lagrange interpolation method to perform a local search for the global best point (gbest). Second, to obtain a better exemplar, the gbest and two other particles' historical best points (pbest) are chosen to perform Lagrange interpolation, yielding a new exemplar that replaces CLPSO's comparison method. The numerical experiments conducted on various functions demonstrate the superiority of this algorithm, and the two methods are shown to be efficient at accelerating convergence without leading the particles to premature convergence. PMID:27123982

  10. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934

  11. PARTICLE REMOVAL AND HEAD LOSS DEVELOPMENT IN BIOLOGICAL FILTERS

    EPA Science Inventory

    The physical performance of granular media filters was studied under pre-chlorinated, backwash-chlorinated, and nonchlorinated conditions. Overall, biological filtration produced a high-quality water. Although effluent turbidities showed little difference between the perform...

  12. Particle filter with one-step randomly delayed measurements and unknown latency probability

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Huang, Yulong; Li, Ning; Zhao, Lin

    2016-01-01

    In this paper, a new particle filter is proposed to solve the nonlinear and non-Gaussian filtering problem when measurements are randomly delayed by one sampling time and the latency probability of the delay is unknown. In the proposed method, particles and their weights are updated in the Bayesian filtering framework by considering the randomly delayed measurement model, and the latency probability is identified by the maximum likelihood criterion. The superior performance of the proposed particle filter compared with existing methods and the effectiveness of the proposed identification method for the latency probability are both illustrated in two numerical examples concerning a univariate non-stationary growth model and bearings-only tracking.
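
    The measurement model treated above can be sketched as follows: with latency probability p, the filter receives the previous measurement instead of the current one. The dynamics below use the commonly cited univariate non-stationary growth model; the latency probability and noise levels are illustrative assumptions.

      # One-step randomly delayed measurements on the univariate non-stationary growth model.
      import numpy as np

      rng = np.random.default_rng(2)
      p_latency, steps = 0.3, 50
      x, z_prev = 0.1, 0.0                                # initial state; dummy "previous" measurement
      received = []
      for k in range(1, steps + 1):
          x = 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * k) + rng.normal(0.0, 1.0)
          z = x**2 / 20.0 + rng.normal(0.0, 1.0)          # ideal (undelayed) measurement
          y = z_prev if rng.random() < p_latency else z   # measurement actually delivered
          received.append(y)
          z_prev = z
      print("first five delivered measurements:", np.round(received[:5], 2))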

  13. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

    In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, the stopband and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647

  14. An Introduction to Twisted Particle Filters and Parameter Estimation in Non-Linear State-Space Models

    NASA Astrophysics Data System (ADS)

    Ala-Luhtala, Juha; Whiteley, Nick; Heine, Kari; Piche, Robert

    2016-09-01

    Twisted particle filters are a class of sequential Monte Carlo methods recently introduced by Whiteley and Lee to improve the efficiency of marginal likelihood estimation in state-space models. The purpose of this article is to extend the twisted particle filtering methodology, establish accessible theoretical results which convey its rationale, and provide a demonstration of its practical performance within particle Markov chain Monte Carlo for estimating static model parameters. We derive twisted particle filters that incorporate systematic or multinomial resampling and information from historical particle states, and a transparent proof which identifies the optimal algorithm for marginal likelihood estimation. We demonstrate how to approximate the optimal algorithm for nonlinear state-space models with Gaussian noise and we apply such approximations to two examples: a range and bearing tracking problem and an indoor positioning problem with Bluetooth signal strength measurements. We demonstrate improvements over standard algorithms in terms of variance of marginal likelihood estimates and Markov chain autocorrelation for given CPU time, and improved tracking performance using estimated parameters.

  15. Particle Filtering Equalization Method for a Satellite Communication Channel

    NASA Astrophysics Data System (ADS)

    Sénécal, Stéphane; Amblard, Pierre-Olivier; Cavazzana, Laurent

    2004-12-01

    We propose the use of particle filtering techniques and Monte Carlo methods to tackle the in-line and blind equalization of a satellite communication channel. The main difficulties encountered are the nonlinear distortions caused by the amplifier stage in the satellite. Several processing methods manage to take into account these nonlinearities but they require the knowledge of a training input sequence for updating the equalizer parameters. Blind equalization methods also exist but they require a Volterra modelization of the system which is not suited for equalization purpose for the present model. The aim of the method proposed in the paper is also to blindly restore the emitted message. To reach this goal, a Bayesian point of view is adopted. Prior knowledge of the emitted symbols and of the nonlinear amplification model, as well as the information available from the received signal, is jointly used by considering the posterior distribution of the input sequence. Such a probability distribution is very difficult to study and thus motivates the implementation of Monte Carlo simulation methods. The presentation of the equalization method is cut into two parts. The first part solves the problem for a simplified model, focusing on the nonlinearities of the model. The second part deals with the complete model, using sampling approaches previously developed. The algorithms are illustrated and their performance is evaluated using bit error rate versus signal-to-noise ratio curves.

  16. Ultrafine particle removal by residential heating, ventilating, and air-conditioning filters.

    PubMed

    Stephens, B; Siegel, J A

    2013-12-01

    This work uses an in situ filter test method to measure the size-resolved removal efficiency of indoor-generated ultrafine particles (approximately 7-100 nm) for six new commercially available filters installed in a recirculating heating, ventilating, and air-conditioning (HVAC) system in an unoccupied test house. The fibrous HVAC filters were previously rated by the manufacturers according to ASHRAE Standard 52.2 and ranged from shallow (2.5 cm) fiberglass panel filters (MERV 4) to deep-bed (12.7 cm) electrostatically charged synthetic media filters (MERV 16). Measured removal efficiency ranged from 0 to 10% for most ultrafine particles (UFP) sizes with the lowest rated filters (MERV 4 and 6) to 60-80% for most UFP sizes with the highest rated filter (MERV 16). The deeper bed filters generally achieved higher removal efficiencies than the panel filters, while maintaining a low pressure drop and higher airflow rate in the operating HVAC system. Assuming constant efficiency, a modeling effort using these measured values for new filters and other inputs from real buildings shows that MERV 13-16 filters could reduce the indoor proportion of outdoor UFPs (in the absence of indoor sources) by as much as a factor of 2-3 in a typical single-family residence relative to the lowest efficiency filters, depending in part on particle size. PMID:23590456

  17. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690

  18. [Research on engine remaining useful life prediction based on oil spectrum analysis and particle filtering].

    PubMed

    Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song

    2013-09-01

    Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of identifying the optimal maintenance strategy for a machine system. Because of the complexity of a machine system, its health-state degradation process cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems; therefore, the PF approach was applied to state forecasting with SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained from the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecast with the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed method. PMID:24369656
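
    A common way to turn the multi-step-ahead PF prediction described above into an RUL estimate is to propagate the posterior particles through the degradation model until they cross a failure threshold and read the RUL off the crossing-time distribution. The linear-drift wear model, noise levels and threshold below are illustrative assumptions, not an engine oil-debris model.

      # Propagate degradation particles to a failure threshold and read off the RUL distribution.
      import numpy as np

      rng = np.random.default_rng(3)
      N, threshold, horizon = 5000, 30.0, 200

      # Posterior particles of (current wear level, drift per step), e.g. taken from a particle filter.
      level = rng.normal(18.0, 1.0, N)
      drift = rng.normal(0.15, 0.03, N)

      rul = np.full(N, float(horizon))
      for step in range(1, horizon + 1):
          level = level + drift + rng.normal(0.0, 0.2, N)      # one-step-ahead degradation model
          newly_failed = (level >= threshold) & (rul == horizon)
          rul[newly_failed] = step

      print("RUL median: %.0f steps, 90%% interval: %s steps"
            % (np.median(rul), np.percentile(rul, [5, 95]).round(0)))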

  19. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial-distance observability of the target is poor, algorithms based on sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method can provide better performance than the traditional PF method.

  20. Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2013-01-01

    The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.

  1. Single Wall Diesel Particulate Filter (DPF) Filtration Efficiency Studies Using Laboratory Generated Particles

    SciTech Connect

    Yang, Juan; Stewart, Marc; Maupin, Gary D.; Herling, Darrell R.; Zelenyuk, Alla

    2009-04-15

    Diesel offers higher fuel efficiency, but produces more exhaust particulate matter. Diesel particulate filters are presently the most efficient means to reduce these emissions. These filters typically trap particles in two basic modes: at the beginning of the exposure cycle the particles are captured in the filter pores, and at longer times the particles form a "cake" on which subsequent particles are trapped. Eventually the "cake" is removed by oxidation and the cycle is repeated. We have investigated the properties and behavior of two commonly used filters, silicon carbide (SiC) and cordierite (DuraTrap® RC), by exposing them to nearly-spherical ammonium sulfate particles. We show that the transition from deep-bed filtration to "cake" filtration can easily be identified by recording the change in pressure across the filters as a function of exposure. We investigated the performance of these filters as a function of flow rate and particle size. The filters trap small and large particles more efficiently than particles that are ~80 to 200 nm in aerodynamic diameter. A comparison between the experimental data and a simulation using an incompressible lattice-Boltzmann model shows very good qualitative agreement, but the model overpredicts the filter's trapping efficiency.
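
    The transition from deep-bed to cake filtration mentioned above shows up as a change of slope in the pressure drop versus exposure curve; a simple way to locate it numerically is a two-segment straight-line fit with a scanned breakpoint. The synthetic trace below is an illustrative assumption, not the measured SiC or cordierite data.

      # Locate the deep-bed -> cake-filtration knee in a pressure-drop trace.
      import numpy as np

      rng = np.random.default_rng(4)
      load = np.linspace(0.0, 10.0, 200)                                   # particle loading [a.u.]
      dp = np.where(load < 3.0, 0.8 * load, 2.4 + 0.25 * (load - 3.0))     # synthetic dP [kPa]
      dp += rng.normal(0.0, 0.02, load.size)

      def two_line_residual(k):
          # total squared residual of straight-line fits to the two segments split at index k
          r1 = np.polyfit(load[:k], dp[:k], 1, full=True)[1]
          r2 = np.polyfit(load[k:], dp[k:], 1, full=True)[1]
          return r1.sum() + r2.sum()

      best = min(range(10, load.size - 10), key=two_line_residual)
      print("estimated transition at loading ~ %.2f" % load[best])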

  2. Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates

    NASA Astrophysics Data System (ADS)

    Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.

    2015-12-01

    Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Unlike the UFIR filter, whose MSE is minimized at an optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.

  3. Particle filtering methods for georeferencing panoramic image sequence in complex urban scenes

    NASA Astrophysics Data System (ADS)

    Ji, Shunping; Shi, Yun; Shan, Jie; Shao, Xiaowei; Shi, Zhongchao; Yuan, Xiuxiao; Yang, Peng; Wu, Wenbin; Tang, Huajun; Shibasaki, Ryosuke

    2015-07-01

    Georeferencing image sequences is critical for mobile mapping systems. Traditional methods such as bundle adjustment need adequate and well-distributed ground control points (GCPs) when accurate GPS data are not available in complex urban scenes. For applications over large areas, automatic extraction of GCPs by matching vehicle-borne image sequences with geo-referenced ortho-images is a better choice than intensive GCP collection by field surveying. However, such image-matching-generated GCPs are highly noisy, especially in complex urban street environments, due to shadows, occlusions and moving objects in the ortho-images. This study presents a probabilistic solution that integrates matching and localization under one framework. First, a probabilistic and global localization model is formulated based on Bayes' rule and a Markov chain. Unlike many conventional methods, our model can accommodate non-Gaussian observations. In the next step, a particle filtering method is applied to determine this model under highly noisy GCPs. Owing to the multiple-hypothesis tracking represented by diverse particles, the method can balance the strength of geometric and radiometric constraints, i.e., drifted motion models and noisy GCPs, and guarantee an approximately optimal trajectory. Tests were carried out with thousands of mobile panoramic images and aerial ortho-images. Compared with conventional extended Kalman filtering and a global registration method, the proposed approach succeeds even with more than 80% gross errors in the GCPs and reaches an accuracy equivalent to traditional bundle adjustment with dense and precise control.

  4. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    SciTech Connect

    Stillo, Andrew; Ricketts, Craig I.

    2013-07-01

    High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature after loading in use over long service intervals, is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1

  5. Optimal design of multichannel fiber Bragg grating filters using Pareto multi-objective optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Liu, Tundong; Jiang, Hao

    2016-01-01

    A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
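
    The TOPSIS step used above to pick a compromise design from the Pareto front is small enough to sketch directly: normalize the objective matrix, weight it, and rank candidates by closeness to the ideal point. The example front (maximum index modulation versus mean dispersion error) is synthetic, not an NSGA-II output from the paper.

      # TOPSIS selection of a compromise solution from a Pareto front of minimized objectives.
      import numpy as np

      def topsis(F, w):
          """F: (n, m) objective matrix, all objectives minimized; w: weights summing to 1."""
          R = F / np.linalg.norm(F, axis=0)                # vector-normalize each objective column
          V = R * w
          ideal, nadir = V.min(axis=0), V.max(axis=0)      # best and worst attainable values
          d_best = np.linalg.norm(V - ideal, axis=1)
          d_worst = np.linalg.norm(V - nadir, axis=1)
          return int(np.argmax(d_worst / (d_best + d_worst)))

      # Synthetic front: [maximum index modulation (a.u.), mean dispersion error (a.u.)].
      front = np.array([[1.00, 9.0], [1.10, 6.5], [1.30, 4.0], [1.70, 2.5], [2.40, 2.0]])
      print("compromise design index:", topsis(front, np.array([0.5, 0.5])))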

  6. Optimization of the performances of correlation filters by pre-processing the input plane

    NASA Astrophysics Data System (ADS)

    Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.

    2016-01-01

    We report findings on the optimization of the performance of correlation filters. First, we propound and validate an optimization of ROC curves adapted to the correlation technique. Our analysis then suggests that a pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Rewardingly, our technical results demonstrate that this method is remarkably efficient at increasing the performance of a VanderLugt correlator.

  7. A comparison of particle filters and multiple-hypothesis extended Kalman filters for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zaugg, David A.; Samuel, Alphonso A.; Waagen, Donald E.; Schmitt, Harry A.

    2004-07-01

    Bearings-only tracking is widely used in the defense arena. Its value can be exploited in systems using optical sensors and sonar, among others. Non-linearity and non-Gaussian prior statistics are among the complications of bearings-only tracking. Several filters have been used to overcome these obstacles, including particle filters and multiple hypothesis extended Kalman filters (MHEKF). Particle filters can accommodate a wide range of distributions and do not need to be linearized. Because of this they seem ideally suited for this problem. A MHEKF can only approximate the prior distribution of a bearings-only tracking scenario and needs to be linearized. However, the likelihood distribution maintained for each MHEKF hypothesis demonstrates significant memory and lends stability to the algorithm, potentially enhancing tracking convergence. Also, the MHEKF is insensitive to outliers. For the scenarios under investigation, the sensor platform is tracking a moving and a stationary target. The sensor is allowed to maneuver in an attempt to maximize tracking performance. For these scenarios, we compare and contrast the acquisition time and mean-squared tracking error performance characteristics of particle filters and MHEKF via Monte Carlo simulation.

  8. A novel permanently magnetised high gradient magnetic filter using assisted capture for fine particles

    SciTech Connect

    Watson, J.H.P.

    1995-02-01

    This paper describes the structure and properties of a novel permanently magnetised magnetic filter for fine friable radioactive material. A filter described and tested previously was designed so that the holes in the filter are left open as capture proceeds, which means the pressure drop builds up only slowly. That filter is not suitable for friable composite particles, which can be broken by mechanical forces. The structure of the magnetic part of the second filter has been changed so as to strongly capture particles composed of fine particles weakly bound together, which tend to break when captured. This uses a principle of assisted capture, in which coarse particles aid the capture of the fine fragments. The technique has the unfortunate consequence that the pressure drop across the filter rises faster as capture proceeds than in the filter described previously. These filters have the following characteristics: (1) No external magnet is required. (2) No external power is required. (3) Small in size and portable. (4) Easily interchangeable. (5) Can be cleaned without demagnetising.

  9. PARTICLE TRANSPORTATION AND DEPOSITION IN HOT GAS FILTER VESSELS - A COMPUTATIONAL AND EXPERIMENTAL MODELING APPROACH

    SciTech Connect

    Goodarz Ahmadi

    2002-07-01

    In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel, and makes use of Lagrangian trajectory analysis for the particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered. These include a vessel with a short shroud, a filter vessel with no shroud and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake leads to lower mechanical strength, thus allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold-flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow in the laboratory-scale vessel was studied experimentally. A computer model for the experimental vessel was also developed, and the gas flow and particle transport patterns were evaluated.

  10. Single half-wavelength ultrasonic particle filter: Predictions of the transfer matrix multilayer resonator model and experimental filtration results

    NASA Astrophysics Data System (ADS)

    Hawkes, Jeremy J.; Coakley, W. Terence; Gröschl, Martin; Benes, Ewald; Armstrong, Sian; Tasker, Paul J.; Nowotny, Helmut

    2002-03-01

    The quantitative performance of a "single half-wavelength" acoustic resonator operated at frequencies around 3 MHz as a continuous flow microparticle filter has been investigated. Standing wave acoustic radiation pressure on suspended particles (5-μm latex) drives them towards the center of the half-wavelength separation channel. Clarified suspending phase from the region closest to the filter wall is drawn away through a downstream outlet. The filtration efficiency of the device was established from continuous turbidity measurements at the filter outlet. The frequency dependence of the acoustic energy density in the aqueous particle suspension layer of the filter system was obtained by application of the transfer matrix model [H. Nowotny and E. Benes, J. Acoust. Soc. Am. 82, 513-521 (1987)]. Both the measured clearances and the calculated energy density distributions showed a maximum at the fundamental of the piezoceramic transducer and a second, significantly larger, maximum at another system's resonance not coinciding with any of the transducer or empty chamber resonances. The calculated frequency of this principal energy density maximum was in excellent agreement with the optimal clearance frequency for the four tested channel widths. The high-resolution measurements of filter performance provide, for the first time, direct verification of the matrix model predictions of the frequency dependence of acoustic energy density in the water layer.
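
    For orientation, the operating frequency of a half-wavelength channel follows from f = c/(2d). The short check below uses a nominal sound speed in water of 1480 m/s and a hypothetical 0.25 mm channel width, which lands near the ~3 MHz regime mentioned above; the actual channel widths and sound speeds of the study are not reproduced here.

      # Half-wavelength resonance: the channel width d equals half the acoustic
      # wavelength, so the drive frequency is f = c / (2 d).
      c = 1480.0      # nominal sound speed in water, m/s
      d = 0.25e-3     # hypothetical channel width, m
      f = c / (2 * d)
      print(f"half-wavelength resonance ~ {f / 1e6:.2f} MHz")   # ~2.96 MHz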

  11. Design of SLM-constrained MACE filters using simulated annealing optimization

    NASA Astrophysics Data System (ADS)

    Khan, Ajmal; Rajan, P. Karivaratha

    1993-10-01

    Among the available filters for pattern recognition, the MACE filter produces the sharpest peak with very small sidelobes. However, when these filters are implemented using practical spatial light modulators (SLMs), because of the constrained nature of the amplitude and phase modulation characteristics of the SLM, the implementation is no longer optimal. The resulting filter response does not produce high accuracy in the recognition of the test images. In this paper, this deterioration in response is overcome by designing constrained MACE filters such that the filter is allowed to have only those values of phase-amplitude combination that can be implemented on a specified SLM. The design is carried out using simulated annealing optimization technique. The algorithm developed and the results obtained on computer simulations of the designed filters are presented.

  12. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    SciTech Connect

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  13. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    DOE PAGESBeta

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

  14. Pattern recognition with composite correlation filters designed with multi-objective combinatorial optimization

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

    2015-03-01

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
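
    The multi-objective synthesis itself is not reproduced below. As a hedged baseline for what "a combination of several training templates" means, the sketch shows the classical equal-correlation-peak SDF construction, in which the composite filter is the linear combination of training templates constrained to give unit central correlation peaks for every training image; templates and sizes are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)

      def ecp_sdf(templates, peaks=None):
          """Equal-correlation-peak SDF: h = X (X^T X)^{-1} u, where the columns of X
          are the vectorized training templates and u holds the desired peak values."""
          X = np.column_stack([t.ravel() for t in templates])
          u = np.ones(X.shape[1]) if peaks is None else np.asarray(peaks, float)
          h = X @ np.linalg.solve(X.T @ X, u)
          return h.reshape(templates[0].shape)

      # Synthetic training set: a bright square at slightly shifted positions plus noise.
      templates = []
      for s in range(3):
          img = np.zeros((32, 32))
          img[10 + s:18 + s, 10 + s:18 + s] = 1.0
          templates.append(img + 0.05 * rng.normal(size=img.shape))

      h = ecp_sdf(templates)
      # By construction, the central correlation with each training image is ~1.
      print([round(float(np.vdot(h, t)), 3) for t in templates])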

  15. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, algorithms based on particle swarm optimization (PSO) and Boolean particle swarm optimization (BPSO) are proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
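
    To make "optimized Gabor filters" concrete, the sketch below builds a standard 2-D Gabor kernel from the parameters (wavelength, orientation, phase, envelope width, aspect ratio) that a PSO/BPSO search could tune per dataset. The parameter values shown are placeholders, not the ones selected for any of the datasets above.

      import numpy as np

      def gabor_kernel(size, wavelength, theta, psi=0.0, sigma=None, gamma=0.5):
          """Real part of a 2-D Gabor filter; every argument is a knob an optimizer
          such as PSO/BPSO could tune (the defaults here are arbitrary)."""
          sigma = 0.56 * wavelength if sigma is None else sigma
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
          return envelope * np.cos(2 * np.pi * xr / wavelength + psi)

      k = gabor_kernel(size=21, wavelength=8.0, theta=np.pi / 4)
      print(k.shape, round(float(k.max()), 3))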

  16. Particle filter for state estimation of jump Markov nonlinear system with application to multi-targets tracking

    NASA Astrophysics Data System (ADS)

    Han, Hua; Ding, Yongsheng; Hao, Kuangrong; Hu, Liangjian

    2013-07-01

    In this article, we first introduce the problem of state estimation of jump Markov nonlinear systems (JMNSs). Since the density evolution for the predictor equations satisfies the Fokker-Planck-Kolmogorov equation (FPKE) in Bayes estimation, the FPKE in conjunction with Bayes' conditional density update formula can provide optimal estimation for a general continuous-discrete nonlinear filtering problem. It is well known that the analytical solution of the FPKE and Bayes' formula is extremely difficult to obtain except in a few special cases. Hence, we design a particle filter to achieve Bayes estimation of the JMNSs. In order to test the viability of our algorithm, we apply it to multiple-target tracking in video surveillance. Before starting the simulation, we introduce the 'birth' and 'death' description of targets, the targets' transitional probability model, and the observation probability. The experimental results show good performance of our proposed filter for multiple-target tracking.

  17. Particle filter-based data assimilation for a three-dimensional biological ocean model and satellite observations

    NASA Astrophysics Data System (ADS)

    Mattern, Jann Paul; Dowd, Michael; Fennel, Katja

    2013-05-01

    We assimilate satellite observations of surface chlorophyll into a three-dimensional biological ocean model in order to improve its state estimates using a particle filter referred to as sequential importance resampling (SIR). Particle Filters represent an alternative to other, more commonly used ensemble-based state estimation techniques like the ensemble Kalman filter (EnKF). Unlike the EnKF, Particle Filters do not require normality assumptions about the model error structure and are thus suitable for highly nonlinear applications. However, their application in oceanographic contexts is typically hampered by the high dimensionality of the model's state space. We apply SIR to a high-dimensional model with a small ensemble size (20) and modify the standard SIR procedure to avoid complications posed by the high dimensionality of the model state. Two extensions to the SIR include a simple smoother to deal with outliers in the observations, and state-augmentation which provides the SIR with parameter memory. Our goal is to test the feasibility of biological state estimation with SIR for realistic models. For this purpose we compare the SIR results to a model simulation with optimal parameters with respect to the same set of observations. By running replicates of our main experiments, we assess the robustness of our SIR implementation. We show that SIR is suitable for satellite data assimilation into biological models and that both extensions, the smoother and state-augmentation, are required for robust results and improved fit to the observations.
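
    A hedged sketch of the core SIR step being described: the ensemble is reweighted by the observation likelihood and then resampled back to uniform weights. The systematic resampling scheme, the toy scalar state and the Gaussian likelihood below are placeholders; the paper's smoother and state-augmentation extensions are not shown.

      import numpy as np

      def sir_update(particles, weights, log_lik, rng):
          """One SIR assimilation step: reweight the ensemble by the observation
          likelihood, then resample systematically back to uniform weights."""
          logw = np.log(weights) + np.array([log_lik(p) for p in particles])
          logw -= logw.max()                    # stabilize before exponentiating
          w = np.exp(logw)
          w /= w.sum()
          n = len(particles)
          c = np.cumsum(w)
          c /= c[-1]
          positions = (rng.random() + np.arange(n)) / n    # systematic resampling
          idx = np.searchsorted(c, positions)
          return particles[idx], np.full(n, 1.0 / n)

      # Toy usage: a 20-member ensemble of scalar states and one Gaussian observation
      # of value 0.8 with error standard deviation 0.3 (all numbers are placeholders).
      rng = np.random.default_rng(0)
      ens = rng.normal(0.0, 1.0, size=(20, 1))
      ens2, w2 = sir_update(ens, np.full(20, 1.0 / 20),
                            lambda x: -0.5 * ((x[0] - 0.8) / 0.3) ** 2, rng)
      print("prior mean %.2f -> posterior mean %.2f" % (ens.mean(), ens2.mean()))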

  18. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is preferred where accuracy is more essential than convergence speed.
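
    A hedged sketch of the TLBO loop applied to IIR parameter identification: candidate coefficient vectors are scored by how well their filter output matches an unknown plant's output for a shared excitation, and the population is improved through the teacher and learner phases, with no algorithm-specific parameters to tune. The plant order, coefficients and population settings below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      def iir(b, a, x):
          """Direct-form IIR filter, y[n] = sum_i b_i x[n-i] - sum_j a_j y[n-j], a[0] = 1."""
          y = np.zeros_like(x)
          for n in range(len(x)):
              acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
              acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
              y[n] = acc
          return y

      # Unknown second-order plant (to be identified) and a shared white-noise excitation.
      b_true, a_true = np.array([0.05, 0.4]), np.array([1.0, -1.1314, 0.25])
      x = rng.normal(size=200)
      y_true = iir(b_true, a_true, x)

      def cost(theta):                          # theta = [b0, b1, a1, a2]
          return float(np.mean((iir(theta[:2], np.r_[1.0, theta[2:]], x) - y_true) ** 2))

      pop = rng.uniform(-1.5, 1.5, size=(30, 4))
      for it in range(200):
          costs = np.array([cost(p) for p in pop])
          teacher, mean = pop[costs.argmin()], pop.mean(axis=0)
          # Teacher phase: move learners toward the teacher, away from the class mean.
          tf = rng.integers(1, 3, size=(30, 1))              # teaching factor in {1, 2}
          cand = np.clip(pop + rng.random((30, 4)) * (teacher - tf * mean), -2, 2)
          better = np.array([cost(c) for c in cand]) < costs
          pop[better] = cand[better]
          # Learner phase: each learner moves toward a better peer or away from a worse one.
          costs = np.array([cost(p) for p in pop])
          peer = rng.permutation(30)
          step = np.where((costs < costs[peer])[:, None], pop - pop[peer], pop[peer] - pop)
          cand = np.clip(pop + rng.random((30, 4)) * step, -2, 2)
          better = np.array([cost(c) for c in cand]) < costs
          pop[better] = cand[better]

      best = pop[np.argmin([cost(p) for p in pop])]
      print("identified [b0, b1, a1, a2]:", np.round(best, 3))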

  19. Determination Method for Optimal Installation of Active Filters in Distribution Network with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Hayashi, Yasuhiro; Matsuki, Junya; Kikuya, Hirotaka; Hojo, Masahide

    Recently, harmonic problems in distribution networks have become a concern against the background of the increasing connection of distributed generation (DG) and the spread of power electronics equipment. As one strategy, controlling the harmonic voltage by installing active filters (AFs) has been investigated. In this paper, the authors propose a computation method to determine the optimal allocations, gains and installation number of AFs so as to minimize the maximum value of voltage total harmonic distortion (THD) for a distribution network with DGs. The developed method is based on particle swarm optimization (PSO), a nonlinear optimization method. In particular, the paper considers the case where harmonic voltages and currents arise in the distribution network because many DGs are connected through inverters, and proposes a method to determine the allocation and gain of an AF that suppresses harmonics throughout the whole distribution network. Moreover, the authors propose a method to determine the minimum number of AFs that must be installed, taking into account the case where the target level of harmonic suppression cannot be reached with a single AF. In order to verify the validity and effectiveness of the proposed method, numerical simulations are carried out using an analytical model of a distribution network with DGs.

  20. Bayes optimal template matching for spike sorting - combining fisher discriminant analysis with optimal filtering.

    PubMed

    Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus

    2015-06-01

    Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
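
    A hedged sketch of the whitened matched-filter idea described above: with a known template s and noise covariance C, the linear filter f = C^{-1} s maximizes the output SNR, and its sliding output can be thresholded to detect spike times. The template, the AR(1) noise model and the threshold rule below are synthetic stand-ins, not the paper's calibrated procedure.

      import numpy as np

      rng = np.random.default_rng(3)

      L = 30
      t = np.arange(L)
      template = (np.exp(-0.5 * ((t - 10) / 2.0) ** 2)
                  - 0.4 * np.exp(-0.5 * ((t - 16) / 3.0) ** 2))   # spike-like waveform

      # Colored (AR(1)) noise and its stationary covariance matrix.
      rho, sigma = 0.6, 0.2
      C = sigma ** 2 * rho ** np.abs(np.subtract.outer(t, t))
      f = np.linalg.solve(C, template)          # whitened matched filter f = C^{-1} s

      # Synthetic trace: AR(1) noise with two embedded spikes.
      N = 2000
      noise = np.zeros(N)
      for n in range(1, N):
          noise[n] = rho * noise[n - 1] + rng.normal(0, sigma * np.sqrt(1 - rho ** 2))
      trace = noise.copy()
      for pos in (400, 1300):
          trace[pos:pos + L] += template

      # Sliding filter output and a threshold from a robust noise estimate of the output.
      out = np.array([f @ trace[n:n + L] for n in range(N - L)])
      thr = 5.0 * np.median(np.abs(out)) / 0.6745
      print("first detections near samples:", np.where(out > thr)[0][:5])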

  1. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  2. Application of extended Kalman particle filter for dynamic interference fringe processing

    NASA Astrophysics Data System (ADS)

    Ermolaev, Petr A.; Volynsky, Maxim A.

    2016-04-01

    The application of an extended Kalman particle filter for dynamic estimation of interferometric signal parameters is considered. A detailed description of the algorithm is given. The proposed algorithm obtains satisfactory estimates of model interferometric signals even in the presence of erroneous information on model signal parameters. It provides twice the calculation speed of a conventional particle filter by reducing the number of vectors approximating the probability density function of the signal parameter distribution.

  3. Toward the Application of the Implicit Particle Filter to Real Data in a Shallow Water Model of the Nearshore Ocean

    NASA Astrophysics Data System (ADS)

    Miller, R.

    2015-12-01

    Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining and preliminary results.

  4. Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method

    PubMed Central

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo

    2016-01-01

    Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real-time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantage of the possibility of on-line monitoring to evaluate the actual crack length and uses a particle filter to deal with the crack evolution and monitoring uncertainties. The piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The state space model relating to crack propagation is established by the data-driven and finite element methods. Fatigue experiments performed on hole-edge crack specimens have validated the advantages of the proposed method. PMID:26950130
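
    A minimal sketch of the prognosis idea, assuming a Paris-law growth model: particles carry crack length and an uncertain growth constant, are propagated through the growth law, and are reweighted by noisy crack-size estimates such as those a Lamb-wave monitoring system might supply. The Paris constants, loading, noise levels and units are invented; the paper's data-driven/finite-element state-space model is not reproduced.

      import numpy as np

      rng = np.random.default_rng(4)

      # Paris law da/dN = C (dK)^m with dK = dS * sqrt(pi * a) (unit geometry factor).
      C_true, m, dS = 6e-12, 3.0, 80.0          # invented values, loose mm/MPa units
      def grow(a, C, cycles=1000):
          dK = dS * np.sqrt(np.pi * a)
          return a + cycles * C * dK ** m

      # Truth and noisy "monitoring" crack-size measurements every 1000 cycles.
      a, truth, meas = 1.0, [], []
      for k in range(40):
          a = grow(a, C_true)
          truth.append(a)
          meas.append(a + rng.normal(0, 0.2))

      # Particle filter over crack length and log10(C), to track model uncertainty too.
      n = 2000
      pa = np.abs(rng.normal(1.0, 0.3, n))      # crack-length particles
      pc = rng.uniform(-12.0, -11.0, n)         # log10 of the Paris constant
      for z in meas:
          pa = grow(pa, 10.0 ** pc) + np.abs(rng.normal(0, 0.02, n))   # propagate
          w = np.exp(-0.5 * ((z - pa) / 0.2) ** 2)                     # measurement update
          w /= w.sum()
          idx = rng.choice(n, n, p=w)                                  # resample
          pa, pc = pa[idx], pc[idx] + rng.normal(0, 0.01, n)           # jitter parameters

      print(f"true crack {truth[-1]:.2f}, filtered estimate {pa.mean():.2f}")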

  5. Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo

    2016-01-01

    Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real-time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantage of the possibility of on-line monitoring to evaluate the actual crack length and uses a particle filter to deal with the crack evolution and monitoring uncertainties. The piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The state space model relating to crack propagation is established by the data-driven and finite element methods. Fatigue experiments performed on hole-edge crack specimens have validated the advantages of the proposed method. PMID:26950130

  6. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
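
    A hedged sketch of the projection step only: 3-D particle positions are mapped through each sensor's pinhole model onto its image plane, a per-sensor likelihood is read from the image intensity at the projected pixel, and the product (sum of logs) over sensors weights the particles. The camera geometry and intensity-based likelihood are simplified stand-ins for the paper's measurement model.

      import numpy as np

      rng = np.random.default_rng(5)

      def project(points, cam_pos, focal=50.0, size=64):
          """Pinhole projection of world points into a camera looking along +z."""
          rel = points - cam_pos
          u = np.clip(np.round(focal * rel[:, 0] / rel[:, 2] + size / 2).astype(int), 0, size - 1)
          v = np.clip(np.round(focal * rel[:, 1] / rel[:, 2] + size / 2).astype(int), 0, size - 1)
          return u, v

      def joint_loglik(particles, images, cam_positions):
          """Sum of per-sensor log-likelihoods read at each particle's projected pixel."""
          ll = np.zeros(len(particles))
          for img, cam in zip(images, cam_positions):
              u, v = project(particles, cam)
              ll += np.log(img[v, u] + 1e-6)
          return ll

      # Two synthetic sensors observe a target at (0, 0, 100); each image is a blurred blob.
      cams = [np.array([0.0, 0.0, 0.0]), np.array([30.0, 0.0, 0.0])]
      target = np.array([0.0, 0.0, 100.0])
      yy, xx = np.mgrid[0:64, 0:64]
      images = []
      for cam in cams:
          u, v = project(target[None, :], cam)
          images.append(np.exp(-((xx - u[0]) ** 2 + (yy - v[0]) ** 2) / 20.0))

      particles = target + rng.normal(0, 5.0, size=(3000, 3))
      w = np.exp(joint_loglik(particles, images, cams))
      w /= w.sum()
      print("weighted 3-D position estimate:", np.round(w @ particles, 2))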

  7. PARTICLE TRANSPORT AND DEPOSITION IN THE HOT-GAS FILTER AT WILSONVILLE

    SciTech Connect

    Goodarz Ahmadi

    1999-06-24

    Particle transport and deposition in the Wilsonville hot-gas filter vessel is studied. The filter vessel contains a total of 72 filters, which are arranged in two tiers. These are modeled by six upper and one lower cylindrical effective filters. An unstructured grid of 312,797 cells generated by GAMBIT is used in the simulations. The Reynolds stress model of the FLUENT™ (version 5.0) code is used for evaluating the gas mean velocities and root mean-square fluctuation velocities in the vessel. The particle equation of motion includes the drag, the gravitational and the lift forces. The turbulent instantaneous fluctuation velocity is simulated by a filtered Gaussian white-noise model provided by the FLUENT code. The particle deposition patterns are evaluated, and the effect of particle size is studied. The effects of turbulent dispersion, the lift force and the gravitational force are analyzed. The results show that the deposition pattern depends on particle size. Turbulent dispersion plays an important role in the transport and deposition of particles. Lift and gravitational forces affect the motion of large particles, but have no effect on small particles.

  8. Ceramem filters for removal of particles from hot gas streams

    SciTech Connect

    Bishop, B.A.; Goldsmith, R.L.

    1994-11-01

    The need for hot gas cleanup in the power, advanced coal conversion, process and incineration industries is well documented and extensive development is being undertaken to develop and demonstrate suitable filtration technologies. In general, process conditions include (a) oxidizing or reducing atmospheres, (b) temperatures to 1800°F, (c) pressures to 300 psi, and (d) potentially corrosive components in the gas stream. The most developed technologies entail the use of candle or tube filters, which suffer from fragility, lack of oxidation/corrosion resistance, and high cost. The ceramic membrane filter described below offers the potential to eliminate these limitations.

  9. Optimized digital filtering techniques for radiation detection with HPGe detectors

    NASA Astrophysics Data System (ADS)

    Salathe, Marco; Kihm, Thomas

    2016-02-01

    This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
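
    A hedged sketch of the trapezoidal shaping step only: the digitized step-like pulse is convolved with a difference-of-delayed-averages kernel so that a step becomes a trapezoid with rise k samples and flat top m samples, and the amplitude is read from the flat top. The pulse shape and noise are synthetic, and the ballistic-deficit and pole-zero (decay) corrections discussed above are deliberately omitted.

      import numpy as np

      def trapezoidal_shaper(x, k=100, m=40):
          """Difference-of-delayed-averages shaper: a unit step in x becomes a trapezoid
          with rise k, flat top m and fall k whose flat-top height equals the step size."""
          w = np.concatenate([np.ones(k), np.zeros(m), -np.ones(k)]) / k
          return np.convolve(x, w, mode="full")[: len(x)]

      # Synthetic preamplifier pulse: a step at sample 1000 with a slow exponential decay.
      rng = np.random.default_rng(6)
      n = np.arange(4000)
      pulse = np.where(n >= 1000, np.exp(-(n - 1000) / 20000.0), 0.0)
      trace = pulse + rng.normal(0, 0.01, n.size)

      shaped = trapezoidal_shaper(trace)
      print("flat-top amplitude estimate:", round(float(shaped[1000 + 100 + 20]), 3))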

  10. Assessing consumption of bioactive micro-particles by filter-feeding Asian carp

    USGS Publications Warehouse

    Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.

    2012-01-01

    Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 μm) and large (150-200 μm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.

  11. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    PubMed

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575

  12. Parallel field programmable gate array particle filtering architecture for real-time neural signal processing.

    PubMed

    Mountney, John; Silage, Dennis; Obeid, Iyad

    2010-01-01

    Both linear and nonlinear estimation algorithms have been successfully applied as neural decoding techniques in brain machine interfaces. Nonlinear approaches such as Bayesian auxiliary particle filters offer improved estimates over other methodologies seemingly at the expense of computational complexity. Real-time implementation of particle filtering algorithms for neural signal processing may become prohibitive when the number of neurons in the observed ensemble becomes large. By implementing a parallel hardware architecture, filter performance can be improved in terms of throughput over conventional sequential processing. Such an architecture is presented here and its FPGA resource utilization is reported. PMID:21096196

  13. Particle size for greatest penetration of HEPA filters - and their true efficiency

    SciTech Connect

    da Roza, R.A.

    1982-12-01

    The particle size that penetrates a filter most readily is a function of filter media construction, aerosol density, and air velocity. In this paper the published results of several experiments are compared with a modern filtration theory that predicts single-fiber efficiency and the particle size of maximum penetration. For high-efficiency particulate air (HEPA) filters used under design conditions this size is calculated to be 0.21 μm diameter. This is in good agreement with the experimental data. The penetration at 0.21 μm is calculated to be seven times greater than at the 0.3 μm used for testing HEPA filters. Several mechanisms by which filters may have a lower efficiency in use than when tested are discussed.

  14. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    NASA Astrophysics Data System (ADS)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.

  15. Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Vikram; Rojas, Cristian R.

    2014-12-01

    This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
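
    A hedged sketch of where the O(XR) cost comes from: if the transition matrix factors as A = U W, with U of size X-by-R and W of size R-by-X (both row-stochastic, so A is a valid transition matrix of rank at most R), the filter prediction A^T pi can be computed as W^T (U^T pi) in O(XR) operations instead of O(X^2). The matrices below are random placeholders, not the copositive-constrained bounding matrices of the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      X, R, Y = 200, 5, 8                       # states, rank, observation symbols

      U = rng.random((X, R)); U /= U.sum(axis=1, keepdims=True)   # X x R, rows sum to 1
      W = rng.random((R, X)); W /= W.sum(axis=1, keepdims=True)   # R x X, rows sum to 1
      # A = U @ W is a rank-R row-stochastic transition matrix; it is never formed.

      B = rng.random((X, Y)); B /= B.sum(axis=1, keepdims=True)   # emission probabilities

      def hmm_filter_lowrank(pi, y):
          """One HMM filter step: prediction A^T pi computed as W^T (U^T pi), cost O(XR)."""
          pred = W.T @ (U.T @ pi)
          post = B[:, y] * pred
          return post / post.sum()

      pi = np.full(X, 1.0 / X)
      for y in rng.integers(0, Y, size=20):
          pi = hmm_filter_lowrank(pi, y)
      print("largest posterior state probability:", round(float(pi.max()), 4))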

  16. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied initially to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade algorithm, we have identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.

  17. NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS

    EPA Science Inventory

    Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2um MMAD)...

  18. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering.

    PubMed

    Capellari, Giovanni; Azam, Saeed Eftekhar; Mariani, Stefano

    2015-01-01

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615

  19. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering

    PubMed Central

    Capellari, Giovanni; Eftekhar Azam, Saeed; Mariani, Stefano

    2015-01-01

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615

  20. Object detection and tracking with active camera on motion vectors of feature points and particle filter.

    PubMed

    Chen, Yong; Zhang, Rong-Hua; Shang, Lei; Hu, Eric

    2013-06-01

    A method based on motion vectors of feature points and a particle filter has been proposed and developed for object detection and tracking with an active/moving camera. The object is first detected from a histogram of motion vectors, and then, on the basis of the particle filter algorithm, the weighting factors are obtained via color information. In addition, a re-sampling strategy and SURF feature points are used to remedy the drawback of particle degeneration. Experimental results demonstrate the practicability and accuracy of the new method and are presented in the paper. PMID:23822380

  1. Method for optimizing output in ultrashort-pulse multipass laser amplifiers with selective use of a spectral filter

    DOEpatents

    Backus, Sterling J.; Kapteyn, Henry C.

    2007-07-10

    A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to set the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

  2. Optimized filtering of regional and teleseismic seismograms: results of maximizing SNR measurements from the wavelet transform and filter banks

    SciTech Connect

    Leach, R.R.; Schultz, C.; Dowla, F.

    1997-07-15

    Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas that have not been well studied or that have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary strongly as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content is different from the frequency content of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals or scales, each scale representing a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR are used to indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low-SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required. The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking and bearing estimation.
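
    A hedged sketch of the band-selection idea, with a plain FFT band split standing in for the constant-Q scale decomposition described above: rms power is compared between a pre-arrival noise window and an arrival window in each band, and only bands whose SNR clears a threshold are kept in the reconstructed, "optimized" filter. The synthetic record, window positions, band edges and threshold are all placeholders.

      import numpy as np

      rng = np.random.default_rng(8)
      fs, n = 100.0, 4096
      t = np.arange(n) / fs

      # Synthetic record: broadband noise plus a 5 Hz arrival between 25 s and 30 s.
      trace = rng.normal(0, 1.0, n)
      arr = (t >= 25) & (t < 30)
      trace[arr] += 3.0 * np.sin(2 * np.pi * 5.0 * t[arr])

      noise_win = slice(0, int(20 * fs))            # pre-arrival noise window
      signal_win = slice(int(25 * fs), int(30 * fs))

      edges = [0.5, 1, 2, 4, 8, 16, 32, 50]         # crude octave-like band edges, Hz
      freqs = np.fft.rfftfreq(n, 1 / fs)
      spec = np.fft.rfft(trace)
      keep = np.zeros_like(freqs, dtype=bool)

      for lo, hi in zip(edges[:-1], edges[1:]):
          band = (freqs >= lo) & (freqs < hi)
          comp = np.fft.irfft(np.where(band, spec, 0), n)        # band-limited component
          snr = np.sqrt(np.mean(comp[signal_win] ** 2) / np.mean(comp[noise_win] ** 2))
          if snr > 2.0:                                          # placeholder threshold
              keep |= band

      filtered = np.fft.irfft(np.where(keep, spec, 0), n)
      snr_out = np.sqrt(np.mean(filtered[signal_win] ** 2) / np.mean(filtered[noise_win] ** 2))
      print("kept", int(keep.sum()), "of", keep.size, "bins; filtered SNR ~", round(float(snr_out), 1))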

  3. A genetic resampling particle filter for freeway traffic-state estimation

    NASA Astrophysics Data System (ADS)

    Bi, Jun; Guan, Wei; Qi, Long-Tao

    2012-06-01

    On-line estimation of the state of traffic based on data sampled by electronic detectors is important for intelligent traffic management and control. Because the traffic state exhibits nonlinear features, and because particle filters have good characteristics when it comes to solving nonlinear problems, a genetic resampling particle filter is proposed to estimate the state of freeway traffic. In this paper, a freeway section of the northern third ring road in Beijing, China, is considered as the experimental object. By analysing the traffic-state characteristics of the freeway, the traffic is modeled based on the validated second-order macroscopic traffic flow model. In order to solve the particle degeneration issue in the performance of the particle filter, a genetic mechanism is introduced into the resampling process. The realization of the genetic particle filter for freeway traffic-state estimation is discussed in detail, and the filter estimation performance is validated and evaluated using the collected experimental data.

  4. An optimal numerical filter for wide-field-of-view measurements of earth-emitted radiation

    NASA Technical Reports Server (NTRS)

    Smith, G. L.; House, F. B.

    1981-01-01

    A technique is described in which all data points along an arc of the orbit may be used in an optimal numerical filter for wide-field-of-view measurements of earth emitted radiation. The statistical filter design is derived whereby the filter is required to give a minimum variance estimate of the radiative exitance at discrete points along the ground track of the satellite. An equation for the optimal numerical filter is given by minimizing the estimate error variance equation with respect to the filter weights, resulting in a discrete form of the Wiener-Hopf equation. Finally, variances of the errors in the radiant exitance can be computed along the ground track and in the cross track directions.
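
    A hedged numerical illustration of the minimum-variance estimate and the discrete Wiener-Hopf condition: with measurement covariance matrix R and cross-covariance vector r between the quantity to be estimated at a ground-track point and the measurements along the arc, the optimal weights solve R w = r and the error variance is sigma^2 - r^T w. The exponential covariance model and noise level below are invented for the example.

      import numpy as np

      s = np.linspace(-5.0, 5.0, 21)        # measurement locations along the orbital arc
      s0 = 0.0                              # ground-track point to be estimated

      def cov(d, sigma2=1.0, L=2.0):
          """Assumed exponential covariance of the radiative exitance field."""
          return sigma2 * np.exp(-np.abs(d) / L)

      noise_var = 0.1
      R = cov(np.subtract.outer(s, s)) + noise_var * np.eye(s.size)   # measurement covariance
      r = cov(s - s0)                                                 # cross-covariance

      w = np.linalg.solve(R, r)             # discrete Wiener-Hopf equation: R w = r
      err_var = cov(0.0) - r @ w            # minimum estimate error variance
      print("sum of weights:", round(float(w.sum()), 3),
            " error variance:", round(float(err_var), 4))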

  5. Chaotic particle swarm optimization with mutation for classification.

    PubMed

    Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza

    2015-01-01

    In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of our proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm-classifier, genetic algorithm, and Imperialist competitive algorithm-classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937

  6. Chaotic Particle Swarm Optimization with Mutation for Classification

    PubMed Central

    Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza

    2015-01-01

    In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence into a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of our proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm-classifier, genetic algorithm, and Imperialist competitive algorithm-classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937

  7. Filter performance of n99 and n95 facepiece respirators against viruses and ultrafine particles.

    PubMed

    Eninger, Robert M; Honda, Takeshi; Adhikari, Atin; Heinonen-Tanski, Helvi; Reponen, Tiina; Grinshpun, Sergey A

    2008-07-01

    The performance of three filtering facepiece respirators (two models of N99 and one N95) challenged with an inert aerosol (NaCl) and three virus aerosols (enterobacteriophages MS2 and T4 and Bacillus subtilis phage), all with significant ultrafine components, was examined using a manikin-based protocol with respirators sealed on manikins. Three inhalation flow rates, 30, 85, and 150 l min⁻¹, were tested. The filter penetration and the quality factor were determined. Between-respirator and within-respirator comparisons of penetration values were performed. At the most penetrating particle size (MPPS), >3% of MS2 virions penetrated through filters of both N99 models at an inhalation flow rate of 85 l min⁻¹. Inhalation airflow had a significant effect upon particle penetration through the tested respirator filters. The filter quality factor was found suitable for making relative performance comparisons. The MPPS for challenge aerosols was <0.1 μm in electrical mobility diameter for all tested respirators. Mean particle penetration (by count) was significantly increased when the size fraction of <0.1 μm was included as compared to particles >0.1 μm. The filtration performance of the N95 respirator approached that of the two models of N99 over the range of particle sizes tested (approximately 0.02 to 0.5 μm). Filter penetration of the tested biological aerosols did not exceed that of inert NaCl aerosol. The results suggest that inert NaCl aerosols may generally be appropriate for modeling filter penetration of similarly sized virions. PMID:18477653

  8. Kalman filter with a linear state model for PDR+WLAN positioning and its application to assisting a particle filter

    NASA Astrophysics Data System (ADS)

    Raitoharju, Matti; Nurminen, Henri; Piché, Robert

    2015-12-01

    Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
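
    A hedged sketch of a tightly coupled update with one possible linear PDR state model: the state holds 2-D position and the displacement per detected step, each step adds the displacement to the position (a linear prediction, so heading never enters nonlinearly), and WLAN position fixes are fused as linear measurements. The noise levels, step geometry and fix rate below are placeholders, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(9)

      # State x = [px, py, dx, dy]: position plus displacement per detected step.
      F = np.array([[1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])
      Q = np.diag([0.01, 0.01, 0.05, 0.05])        # process noise (placeholder)
      H = np.array([[1., 0., 0., 0.],
                    [0., 1., 0., 0.]])              # a WLAN fix observes position only
      Rm = 4.0 * np.eye(2)                          # WLAN fix covariance (placeholder)

      x = np.array([0., 0., 0.7, 0.2])              # initial guess, heading unknown
      P = np.eye(4)

      true_pos, true_step = np.zeros(2), np.array([0.6, 0.35])
      for k in range(50):
          true_pos = true_pos + true_step
          x = F @ x                                 # predict at each detected step
          P = F @ P @ F.T + Q
          if k % 5 == 0:                            # a WLAN position fix every 5 steps
              z = true_pos + rng.normal(0, 2.0, 2)
              S = H @ P @ H.T + Rm
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ (z - H @ x)
              P = (np.eye(4) - K @ H) @ P

      print("estimated step vector:", np.round(x[2:], 2), "true:", true_step)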

  9. Particle-Filter-Based Multisensor Fusion For Solving Low-Frequency Electromagnetic NDE Inverse Problems

    SciTech Connect

    Khan, T.; Ramuhalli, Pradeep; Dass, Sarat

    2011-06-30

    Flaw profile characterization from NDE measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem, and subsequent application of a sequential Monte Carlo method called particle filtering, has been proposed by the authors in an earlier publication [1]. In this study, the problem of flaw characterization from multi-sensor data is considered. The NDE inverse problem is posed as a statistical inverse problem and particle filtering is modified to handle data from multiple measurement modes. The measurement modes are assumed to be independent of each other with principal component analysis (PCA) used to legitimize the assumption of independence. The proposed particle filter based data fusion algorithm is applied to experimental NDE data to investigate its feasibility.

  10. Simplifying Physical Realization of Gaussian Particle Filters with Block-Level Pipeline Control

    NASA Astrophysics Data System (ADS)

    Hong, Sangjin; Djurić, Petar M.; Bolić, Miodrag

    2005-12-01

    We present an efficient physical realization method of particle filters for real-time tracking applications. The methodology is based on block-level pipelining, where data transfer between processing blocks is effectively controlled by autonomous distributed controllers. Block-level pipelining maintains inherent operational concurrency within the algorithm for high-throughput execution. The proposed use of controllers, via parameter reconfiguration, greatly simplifies the overall controller structure and alleviates potential speed bottlenecks that may arise due to the complexity of the controller. A Gaussian particle filter for the bearings-only tracking problem is realized based on the presented methodology. For demonstration, individual coarse-grain processing blocks comprising the particle filter are synthesized using a commercial FPGA. From the execution characteristics obtained from the implementation, the overall controller structure is derived according to the methodology and its temporal correctness is verified using Verilog and SystemC.

  11. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-01-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291

  12. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model

    PubMed Central

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-01-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291

  13. A particle-filtering approach to convoy tracking in the midst of civilian traffic

    NASA Astrophysics Data System (ADS)

    Pollard, Evangeline; Pannetier, Benjamin; Rombaut, Michèle

    2008-04-01

    In the battlefield surveillance domain, ground target tracking is used to evaluate the threat. The data used for tracking are provided by a Ground Moving Target Indicator (GMTI) sensor, which only detects moving targets. Multiple target tracking has been widely studied, but most algorithms have weaknesses when targets are close together, as they are in a convoy. In this work, we propose a filtering approach for convoys in the midst of civilian traffic. Our algorithm is inspired by particle filtering, but its complexity prevents applying it to all targets. Well-discriminated targets are therefore tracked using an Interacting Multiple Model-Multiple Hypothesis Tracking (IMM-MHT) approach, whereas the convoy targets are tracked with a specific particle filter. We make the assumption that the convoy is detected (position and number of targets). Our approach is based on an Independent Partition Particle Filter (IPPF) incorporating constraint regions. The originality of our approach is to consider a velocity constraint (all vehicles belonging to the convoy have the same velocity) and a group constraint. Consequently, the multitarget state vector contains the positions of the individual targets and a single convoy velocity vector. When another target is detected crossing or overtaking the convoy, a specific algorithm is used and the non-cooperative target is tracked with an adapted particle filter. As demonstrated by our simulations, our approach yields a substantial increase in convoy tracking performance.

  14. Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

    PubMed Central

    Moccia, Antonio

    2014-01-01

    Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder given their current positions and velocities. Since established estimation methodologies can lose accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single-sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
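
    The Distance at Closest Point of Approach referred to above has a simple closed form under a constant relative-velocity assumption; the short sketch below computes it from relative position and velocity (the geometry in the example is made up).

      # Distance at Closest Point of Approach (DCPA) for assumed constant relative motion:
      # with relative position r and relative velocity v, closest approach occurs at
      # t* = -(r.v)/|v|^2 (clamped to t* >= 0), giving DCPA = |r + t* v|.
      import numpy as np

      def dcpa(own_pos, own_vel, intruder_pos, intruder_vel):
          r = np.asarray(intruder_pos, float) - np.asarray(own_pos, float)
          v = np.asarray(intruder_vel, float) - np.asarray(own_vel, float)
          v2 = np.dot(v, v)
          t_cpa = 0.0 if v2 == 0.0 else max(0.0, -np.dot(r, v) / v2)
          return np.linalg.norm(r + t_cpa * v), t_cpa

      # Example with made-up geometry (metres, metres/second).
      d, t = dcpa([0, 0, 1000], [60, 0, 0], [4000, 500, 1000], [-50, 0, 0])
      print(f"DCPA = {d:.1f} m at t = {t:.1f} s")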

  15. Evaluation of filter media for particle number, surface area and mass penetrations.

    PubMed

    Li, Lin; Zuo, Zhili; Japuntich, Daniel A; Pui, David Y H

    2012-07-01

    The National Institute for Occupational Safety and Health (NIOSH) developed a standard for respirator certification under 42 CFR Part 84, using a TSI 8130 automated filter tester with photometers. A recent study showed that photometric detection methods may not be sensitive enough for measuring engineered nanoparticles. Present NIOSH standards for penetration measurement are mass-based; however, the threshold limit value/permissible exposure limit for worker exposure to engineered nanoparticles is not yet clear. There is a lack of standardized filter tests for engineered nanoparticles, and the development of a simple nanoparticle filter test is needed. To better understand filter performance against engineered nanoparticles and the correlations among different tests, initial penetration levels of one fiberglass and two electret filter media were measured using a series of polydisperse and monodisperse aerosol test methods at two different laboratories (University of Minnesota Particle Technology Laboratory and 3M Company). Monodisperse aerosol penetrations were measured by a TSI 8160 using NaCl particles from 20 to 300 nm. Particle penetration curves and overall penetrations were measured by a scanning mobility particle sizer (SMPS), condensation particle counter (CPC), nanoparticle surface area monitor (NSAM), and TSI 8130 at two face velocities and three layer thicknesses. Results showed that reproducible, comparable filtration data were achieved between the two laboratories, with proper control of test conditions and calibration procedures. For particle penetration curves, the experimental results of monodisperse testing agreed well with polydisperse SMPS measurements. The most penetrating particle sizes (MPPSs) of the electret and fiberglass filter media were ~50 and 160 nm, respectively. For overall penetrations, the CPC and NSAM results of polydisperse aerosols were close to the penetration at the corresponding median particle sizes. For each filter type, power

  16. Isolated particle swarm optimization with particle migration and global best adoption

    NASA Astrophysics Data System (ADS)

    Tsai, Hsing-Chih; Tyan, Yaw-Yauan; Wu, Yun-Wu; Lin, Yong-Huang

    2012-12-01

    Isolated particle swarm optimization (IPSO) segregates particles into several sub-swarms in order to improve global optimization ability. In this study, particle migration and global best adoption (gbest adoption) are used to improve IPSO. Particle migration allows particles to travel among sub-swarms based on the fitness of the sub-swarms. Gbest adoption allows sub-swarms to consult the gbest, either proportionally or probabilistically, after a certain number of iterations, i.e. gbest replacing and gbest sharing, respectively. Three well-known benchmark functions are used to determine the parameter settings of the IPSO. Then, 13 benchmark functions are used to study the performance of the designed IPSO. Computational experience demonstrates that the designed IPSO is superior to the original version of particle swarm optimization (PSO) in terms of the accuracy and stability of the results when the isolation phenomenon, particle migration, and gbest sharing are involved.
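
    A compact sketch of the sub-swarm idea with periodic particle migration, under simplifying assumptions (the migration rule moves the weakest particle of the weakest sub-swarm toward the best sub-swarm's gbest, on a fixed schedule); it is not the paper's exact migration or gbest-adoption rule.

      # Sub-swarm PSO with periodic particle migration; migration rule and schedule
      # are simplifying assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(2)
      DIM, SWARMS, SIZE, ITERS = 5, 3, 10, 200
      W, C1, C2 = 0.7, 1.5, 1.5

      def sphere(x):                       # benchmark objective (minimise)
          return np.sum(x ** 2, axis=-1)

      x = rng.uniform(-5, 5, (SWARMS, SIZE, DIM))
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), sphere(x)

      for it in range(1, ITERS + 1):
          gbest_idx = pbest_f.argmin(axis=1)                  # per-sub-swarm best
          gbest = pbest[np.arange(SWARMS), gbest_idx]
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest[:, None, :] - x)
          x = x + v
          f = sphere(x)
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          if it % 25 == 0:                                    # periodic migration
              order = pbest_f.min(axis=1).argsort()           # fittest sub-swarm first
              worst_s, best_s = order[-1], order[0]
              donor = pbest_f[worst_s].argmax()               # weakest particle
              x[worst_s, donor] = gbest[best_s] + 0.1 * rng.standard_normal(DIM)
              pbest[worst_s, donor] = x[worst_s, donor]
              pbest_f[worst_s, donor] = sphere(x[worst_s, donor])

      print(pbest_f.min())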

  17. Linear adaptive noise-reduction filters for tomographic imaging: Optimizing for minimum mean square error

    SciTech Connect

    Sun, W Y

    1993-04-01

    This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

  18. Hybrid three-dimensional variation and particle filtering for nonlinear systems

    NASA Astrophysics Data System (ADS)

    Leng, Hong-Ze; Song, Jun-Qiang

    2013-03-01

    This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variation (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. The stochastic resampling step in the standard PF can then be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to traditional ensemble Kalman filtering (EnKF) and the standard PF, especially in highly nonlinear systems.
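
    The 3DVar step used to build the proposal minimizes the standard quadratic cost J(x) = (x - xb)' B^-1 (x - xb) + (y - Hx)' R^-1 (y - Hx), whose minimizer has a closed form; the sketch below evaluates it for a small assumed system, as one would do before the particle weighting step. The matrices and observation pattern are illustrative assumptions.

      # 3DVar analysis step: for the quadratic cost above the minimiser is
      # x_a = xb + B H' (H B H' + R)^{-1} (y - H xb). Small assumed system.
      import numpy as np

      def three_dvar_analysis(xb, B, y, H, R):
          S = H @ B @ H.T + R
          K = B @ H.T @ np.linalg.inv(S)
          return xb + K @ (y - H @ xb)

      xb = np.array([1.0, 0.0, -0.5])                 # background state (e.g. one particle)
      B = np.diag([0.5, 0.5, 0.5])                    # assumed background covariance
      H = np.array([[1.0, 0.0, 0.0],                  # sparse observations of x1 and x3
                    [0.0, 0.0, 1.0]])
      R = 0.1 * np.eye(2)                             # assumed observation covariance
      y = np.array([1.4, -0.2])

      print(three_dvar_analysis(xb, B, y, H, R))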

  19. Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared spectroscopy

    PubMed Central

    Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.

    2011-01-01

    An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. Incorporating the dispersion and optical absorption of the materials, we present here the optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of the materials must be taken into account rigorously for accurate design of narrowband GMR filters. By accounting for background reflections, the optimally designed GMR filters can achieve a narrower bandwidth than a filter designed by the antireflection equivalence method with the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable the use of GMR filter-based instrumentation for common measurements of condensed matter, including tissue and polymer samples. PMID:22109445

  20. Design of an optimal-weighted MACE filter realizable with arbitrary SLM constraints

    NASA Astrophysics Data System (ADS)

    Ge, Jin; Rajan, P. Karivaratha

    1996-03-01

    A realizable optimal-weighted minimum average correlation energy (MACE) filter with arbitrary spatial light modulator (SLM) constraints is presented. The MACE filter can be considered as the cascade of two separate stages. The first stage is the prewhitener, which essentially converts colored noise to white noise. The second stage is the conventional synthetic discriminant function (SDF), which is optimal for white noise but uses training vectors subjected to the prewhitening transformation. The energy spectrum matrix is therefore central to the filter design. The new weight function we introduce adjusts the correlation energy to improve the performance of the MACE filter on current SLMs. The action of the weight function is to emphasize the signal energy at some frequencies and de-emphasize it at others so as to improve the correlation plane structure. The choice of weight function, which is used to enhance noise tolerance and reduce sidelobes, is guided by a priori pattern recognition knowledge. An algorithm that combines an iterative optimization technique with Juday's minimum Euclidean distance (MED) method is developed for the design of the realizable optimal-weighted MACE filter. The performance of the designed filter is evaluated with numerical experiments.
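
    For reference, the frequency-domain synthesis of a weighted MACE filter can be written as h = (WD)^-1 X [X^H (WD)^-1 X]^-1 c, where D is the diagonal average power spectrum of the training images, X holds their DFTs as columns, c the correlation-peak constraints and W an emphasis weight. The sketch below evaluates this formula on random stand-in training data with an assumed weight profile (a uniform weight reduces it to the standard MACE filter); it is not the paper's iterative SLM-constrained design.

      # Weighted MACE synthesis in the frequency domain on assumed training data.
      import numpy as np

      rng = np.random.default_rng(3)
      n_pix, n_train = 64, 3                         # flattened image size, training images
      imgs = rng.random((n_train, n_pix))            # assumed training images (flattened)

      X = np.fft.fft(imgs, axis=1).T                 # columns = training-image DFTs
      D = np.mean(np.abs(X) ** 2, axis=1)            # average power spectrum (diagonal)
      W = np.linspace(1.0, 2.0, n_pix)               # assumed frequency weight function
      c = np.ones(n_train, dtype=complex)            # unit correlation peaks at the origin

      Dinv = 1.0 / (W * D)                           # (W D)^-1 stored as a vector
      A = X.conj().T @ (Dinv[:, None] * X)           # X^H (W D)^-1 X
      h = Dinv[:, None] * X @ np.linalg.solve(A, c)  # frequency-domain filter

      # The synthesised filter reproduces the constrained peak value for each image:
      print(np.round((X.conj().T @ h).real, 6))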

  1. Roundness error assessment based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Chen, G. Q.

    2005-01-01

    Roundness error assessment is a nonlinear optimization problem without constraints. The method of particle swarm optimization (PSO) is proposed to evaluate the roundness error. PSO is an evolutionary algorithm derived from the flocking behavior of foraging birds. PSO regards each feasible solution as a particle (a point in n-dimensional space) and initializes a swarm of random particles in the feasible region. Each particle tracks two best positions: its own best position found so far and the best position found by the entire swarm. Using the inertia weight and these two best positions, all particles update their velocities and positions, and their quality is evaluated by the fitness function. After a number of iterations, the swarm converges to an optimized solution. The reciprocal of the error assessment objective function is adopted as the fitness. In this paper the calculation procedure of the PSO approach is given. Finally, an assessment example is used to verify the method. The results show that the proposed method provides a new way to assess other form and position errors because it converges reliably toward the global optimal solution.
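
    A minimal sketch of the procedure, assuming the minimum-zone criterion (the objective is the radial band width for a candidate centre) and a synthetic measured profile; as in the abstract, the fitness is taken as the reciprocal of the objective.

      # PSO for roundness error under the minimum-zone criterion; measured points and
      # PSO constants are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(4)
      # Assumed measured profile: a unit circle with small form error, offset and noise.
      angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
      pts = np.c_[np.cos(angles), np.sin(angles)] * (1 + 0.01 * np.sin(3 * angles))
      pts += 0.02 + 0.001 * rng.standard_normal(pts.shape)

      def roundness_error(centre):
          r = np.linalg.norm(pts - centre, axis=1)
          return r.max() - r.min()               # radial band width (objective)

      N, ITERS, W, C1, C2 = 30, 150, 0.7, 1.5, 1.5
      x = rng.uniform(-0.1, 0.1, (N, 2))         # candidate centres
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), np.array([roundness_error(p) for p in x])
      g = pbest[pbest_f.argmin()]

      for _ in range(ITERS):
          r1, r2 = rng.random((N, 2)), rng.random((N, 2))
          v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (g - x)
          x = x + v
          f = np.array([roundness_error(p) for p in x])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          g = pbest[pbest_f.argmin()]

      print("fitness =", 1.0 / pbest_f.min(), "roundness error =", pbest_f.min())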

  2. Automatized Parameterization of DFTB Using Particle Swarm Optimization.

    PubMed

    Chou, Chien-Pin; Nishimura, Yoshifumi; Fan, Chin-Chai; Mazur, Grzegorz; Irle, Stephan; Witek, Henryk A

    2016-01-12

    We present a novel density-functional tight-binding (DFTB) parametrization toolkit developed to optimize the parameters of various DFTB models in a fully automatized fashion. The main features of the algorithm, based on the particle swarm optimization technique, are discussed, and a number of initial pilot applications of the developed methodology to molecular and solid systems are presented. PMID:26587758

  3. Removal of Particles and Acid Gases (SO2 or HCl) with a Ceramic Filter by Addition of Dry Sorbents

    SciTech Connect

    Hemmer, G.; Kasper, G.; Wang, J.; Schaub, G.

    2002-09-20

    The present investigation intends to add to the fundamental process design know-how for dry flue gas cleaning, especially with respect to process flexibility, in cases where variations in the type of fuel and thus in concentration of contaminants in the flue gas require optimization of operating conditions. In particular, temperature effects of the physical and chemical processes occurring simultaneously in the gas-particle dispersion and in the filter cake/filter medium are investigated in order to improve the predictive capabilities for identifying optimum operating conditions. Sodium bicarbonate (NaHCO₃) and calcium hydroxide (Ca(OH)₂) are known as efficient sorbents for neutralizing acid flue gas components such as HCl, HF, and SO₂. According to their physical properties (e.g. porosity, pore size) and chemical behavior (e.g. thermal decomposition, reactivity for gas-solid reactions), optimum conditions for their application vary widely. The results presented concentrate on the development of quantitative data for filtration stability and overall removal efficiency as affected by operating temperature. Experiments were performed in a small pilot unit with a ceramic filter disk of the type Dia-Schumalith 10-20 (Fig. 1, described in more detail in Hemmer 2002 and Hemmer et al. 1999), using model flue gases containing SO₂ and HCl, flyash from wood bark combustion, and NaHCO₃ as well as Ca(OH)₂ as sorbent material (particle size d₅₀/d₈₄: 35/192 µm, and 3.5/16, respectively). The pilot unit consists of an entrained flow reactor (gas duct) representing the raw gas volume of a filter house and the filter disk with a filter cake, operating continuously, simulating filter cake build-up and cleaning of the filter medium by jet pulse. Temperatures varied from 200 to 600 C, sorbent stoichiometric ratios from zero to 2, inlet concentrations were on the order of 500 to 700 mg/m³, water vapor contents ranged from

  4. Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study

    NASA Astrophysics Data System (ADS)

    Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.

    2013-12-01

    The safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated to experiments conducted at the US Army Corps of Engineering and Research Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of the soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, the high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under

  5. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.

  6. Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2015-04-01

    It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space as well as effective resampling strategies is discussed to handle realizability and stability issues. To test the performance of the blended algorithms, the forty dimensional Lorenz 96 system is utilized with a five dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics and with changing observation time frequency and both dense and sparse spatial observations. In real applications perfect dynamical models are always inaccessible considering the complexities in both modeling and computation of high dimensional turbulent system. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill in both capturing non-Gaussian statistics and achieving accurate filtering results in various dynamical regimes with and without model errors.

  7. Adaptive Particle Filter for Nonparametric Estimation with Measurement Uncertainty in Wireless Sensor Networks.

    PubMed

    Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng

    2016-01-01

    Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes the WSN observations deviate from the actual state and degrades the estimation accuracy of PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it is usually pre-assumed to follow a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, that can further improve estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use it to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in high-noise environments. In addition, the adaptive PFs adapt to the environment without imposing additional computational complexity. PMID:27249002
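
    The abstract does not give the belief-factor formula, so the sketch below only illustrates the general idea of fusing the raw sensor likelihood with a prior-based likelihood through a belief factor; the geometric mixture, the fixed factor value and all models are assumptions rather than the paper's KL-divergence-optimal derivation.

      # Illustrative likelihood adaptation with an assumed belief factor BETA in [0, 1]:
      # the particle weights use a geometric mixture of the sensor likelihood and a
      # likelihood centred on the prior prediction.
      import numpy as np

      rng = np.random.default_rng(5)
      N, Q, R, BETA = 500, 0.05, 0.3, 0.6          # BETA is an assumed belief factor

      particles = rng.standard_normal(N)
      weights = np.full(N, 1.0 / N)

      def gauss(x, mu, var):
          return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

      for z in [0.3, 0.8, 2.5, 1.0]:               # 2.5 mimics an unreliable observation
          particles = particles + np.sqrt(Q) * rng.standard_normal(N)   # propagate
          prior_pred = np.sum(weights * particles)                      # prior information
          lik = (gauss(particles, z, R) ** BETA
                 * gauss(particles, prior_pred, R) ** (1 - BETA))       # adapted likelihood
          weights = weights * lik
          weights /= weights.sum()
          if 1.0 / np.sum(weights ** 2) < N / 2:                        # resample
              idx = rng.choice(N, N, p=weights)
              particles, weights = particles[idx], np.full(N, 1.0 / N)

      print(np.sum(weights * particles))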

  8. Adaptive Particle Filter for Nonparametric Estimation with Measurement Uncertainty in Wireless Sensor Networks

    PubMed Central

    Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng

    2016-01-01

    Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes the WSN observations deviate from the actual state and degrades the estimation accuracy of PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it is usually pre-assumed to follow a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, that can further improve estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback–Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use it to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in high-noise environments. In addition, the adaptive PFs adapt to the environment without imposing additional computational complexity. PMID:27249002

  9. Quantum-Behaved Particle Swarm Optimization with Chaotic Search

    NASA Astrophysics Data System (ADS)

    Yang, Kaiqiao; Nomura, Hirosato

    The chaotic search is introduced into Quantum-behaved Particle Swarm Optimization (QPSO) to increase the diversity of the swarm in the later period of the search, so as to help the system escape from local optima. Taking full advantage of the ergodicity and randomness of chaotic variables, the chaotic search is carried out in the neighborhoods of particles that are trapped in local optima. The experimental results on test functions show that QPSO with chaotic search outperforms both standard Particle Swarm Optimization (PSO) and QPSO.
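
    A sketch of the chaotic-search ingredient, assuming a logistic map as the chaos generator and a fixed search radius around a particle suspected of being trapped; the map parameter, radius and test objective are illustrative choices, not the paper's settings.

      # Chaotic local search around a trapped particle: a logistic map produces an
      # ergodic sequence in [0, 1], rescaled to a small neighbourhood; the best chaotic
      # trial replaces the particle if it improves the objective.
      import numpy as np

      def rastrigin(x):
          return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

      def chaotic_search(x_trapped, radius=0.5, steps=200):
          best_x, best_f = x_trapped.copy(), rastrigin(x_trapped)
          c = np.full(x_trapped.size, 0.37)          # chaotic variable, avoiding fixed points
          for _ in range(steps):
              c = 4.0 * c * (1.0 - c)                # logistic map, fully chaotic at r = 4
              trial = x_trapped + radius * (2.0 * c - 1.0)
              f = rastrigin(trial)
              if f < best_f:
                  best_x, best_f = trial, f
          return best_x, best_f

      x0 = np.array([0.95, -1.05])                   # near a local optimum of Rastrigin
      print(chaotic_search(x0))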

  10. Cardiac fiber tracking using adaptive particle filtering based on tensor rotation invariant in MRI

    NASA Astrophysics Data System (ADS)

    Kong, Fanhui; Liu, Wanyu; Magnin, Isabelle E.; Zhu, Yuemin

    2016-03-01

    Diffusion magnetic resonance imaging (dMRI) is a non-invasive method currently available for cardiac fiber tracking. However, accurate and efficient cardiac fiber tracking is still a challenge. This paper presents a probabilistic cardiac fiber tracking method based on particle filtering. In this framework, an adaptive sampling technique is presented to describe the posterior distribution of fiber orientations by adjusting the number and status of particles according to the fractional anisotropy of diffusion. An observation model is then proposed to update the weight of particles by rotating diffusion tensor from the primary eigenvector to a given fiber orientation while keeping the shape of the tensor invariant. The results on human cardiac dMRI show that the proposed method is robust to noise and outperforms conventional streamline and particle filtering techniques.

  11. On the application of optimal wavelet filter banks for ECG signal classification

    NASA Astrophysics Data System (ADS)

    Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.

    2014-03-01

    This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated to the filter cut-off sharpness. The goal consists of achieving a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can be subsequently used as inputs for training to a neural network classifier.

  12. A multiobjective memetic algorithm based on particle swarm optimization.

    PubMed

    Liu, Dasheng; Tan, K C; Goh, C K; Ho, W K

    2007-02-01

    In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. PMID:17278557

  13. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
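
    A sketch of the scheduling fitness such a PSO could minimize: decode a particle position into a job-to-node assignment and score it by makespan and flowtime. The decoding rule and the equal weighting of the two objectives are assumptions, not the paper's formulation.

      # Scheduling fitness: makespan = latest node completion time, flowtime = sum of
      # job completion times; job lengths, node speeds and weights are assumed.
      import numpy as np

      job_len = np.array([4.0, 2.0, 7.0, 3.0, 5.0])     # assumed job lengths
      node_speed = np.array([1.0, 2.0])                 # assumed node speeds

      def fitness(position):
          # Decode a continuous position into a node index per job (assumed rule).
          assignment = np.floor(np.clip(position, 0, None)).astype(int) % node_speed.size
          completion, flowtime = np.zeros(node_speed.size), 0.0
          for job, node in enumerate(assignment):
              completion[node] += job_len[job] / node_speed[node]
              flowtime += completion[node]              # completion time of this job
          makespan = completion.max()
          return 0.5 * makespan + 0.5 * flowtime        # assumed equal weighting

      print(fitness(np.array([0.2, 1.7, 0.9, 1.1, 0.3])))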

  14. Accelerating Particle Filter Using Randomized Multiscale and Fast Multipole Type Methods.

    PubMed

    Shabat, Gil; Shmueli, Yaniv; Bermanis, Amit; Averbuch, Amir

    2015-07-01

    Particle filter is a powerful tool for state tracking using non-linear observations. We present a multiscale based method that accelerates the tracking computation by particle filters. Unlike the conventional way, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset from the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses a particle subset to recover the density function for all the rest of the particles not included in the chosen subset. The computational effort is substantial especially when multiple objects are tracked concurrently. The proposed algorithm significantly reduces the computational load. By using the Fast Gaussian Transform, the complexity of the particle selection step is reduced to a linear time in n and k, where n is the number of particles and k is the number of particles in the selected subset. We demonstrate our method on both simulated and on real data such as object tracking in video sequences. PMID:26352448

  15. Optimization of magnetic switches for single particle and cell transport

    NASA Astrophysics Data System (ADS)

    Abedini-Nassab, Roozbeh; Murdoch, David M.; Kim, CheolGi; Yellen, Benjamin B.

    2014-06-01

    The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

  16. Optimization of magnetic switches for single particle and cell transport

    SciTech Connect

    Abedini-Nassab, Roozbeh; Yellen, Benjamin B.; Murdoch, David M.; Kim, CheolGi

    2014-06-28

    The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

  17. Particle sorting in Filter Porous Media and in Sediment Transport: A Numerical and Experimental Study

    NASA Astrophysics Data System (ADS)

    Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.

    2014-12-01

    Understanding the flow of fines, particulate sorting in porous media and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies to name a few. For example, the safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated to experiments conducted at the USACE and ERDC. These laboratory experiments on soil transport and trapping in granular media are performed in constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of the soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, the high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. We have extended these results to more realistic heterogeneous

  18. Boundary filters for vector particles passing parity breaking domains

    SciTech Connect

    Kolevatov, S. S.; Andrianov, A. A.

    2014-07-23

    The electrodynamics supplemented with a Lorentz- and CPT-invariance-violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, stitched together on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Reflection effects (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum through the boundary.

  19. Rheology behavior and optimal damping effect of granular particles in a non-obstructive particle damper

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Chen, Tianning; Wang, Xiaopeng; Fang, Jianglong

    2016-03-01

    To explore the optimal damping mechanism of non-obstructive particle dampers (NOPDs), the relationship between the damping performance of NOPDs and the motion mode of the damping particles was investigated based on the rheological properties of vibrated granular materials. First, the damping performance of NOPDs under different excitation intensities and gap clearances was investigated via cantilever system experiments, and an approximate evaluation of the effective mass and effective damping of NOPDs was performed by fitting the experimental data to an equivalent single-degree-of-freedom (SDOF) system with no damping particles. Phase diagrams showing the motion mode of the damping particles under different excitation intensities and gap clearances were then obtained via a series of vibration table tests. Moreover, the dissipation characteristics of the damping particles were explored by the discrete element method (DEM). The results indicate that when NOPDs achieve their optimal damping effect, the granular Leidenfrost effect is observed, whereby the entire particle bed in the NOPD is levitated above the vibrating base by a layer of highly energetic particles. Finally, the damping characteristics of NOPDs were explained by collisions and friction between particle-particle and particle-wall contacts based on the rheological behavior of the damping particles, and a new dissipation mechanism is proposed for the optimal damping performance of NOPDs.

  20. Empirical Determination of Optimal Parameters for Sodium Double-Edge Magneto-Optic Filters

    NASA Astrophysics Data System (ADS)

    Barry, Ian F.; Huang, Wentao; Smith, John A.; Chu, Xinzhao

    2016-06-01

    A method is proposed for determining the optimal temperature and magnetic field strength used to condition a sodium vapor cell for use in a sodium Double-Edge Magneto-Optic Filter (Na-DEMOF). The desirable characteristics of these filters are first defined and then analyzed over a range of temperatures and magnetic field strengths, using an IDL Faraday filter simulation adapted for the Na-DEMOF. This simulation is then compared to real behavior of a Na-DEMOF constructed for use with the Chu Research Group's STAR Na Doppler resonance-fluorescence lidar for lower atmospheric observations.

  1. Joint State and Parameter Estimation for Two Land Surface Models Using the Ensemble Kalman Filter and Particle Filter

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjuan; Hendricks-Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry

    2016-04-01

    Land surface models (LSMs) resolve the water and energy balance with different parameters and state variables. Many of the parameters of these models cannot be measured directly in the field, and require calibration against flux and soil moisture data. Two LSMs are used in our work: Variable Infiltration Capacity Hydrologic Model (VIC) and the Community Land Model (CLM). Temporal variations in soil moisture content at 5, 20 and 50 cm depth in the Rollesbroich experimental watershed in Germany are simulated in both LSMs. Data assimilation (DA) provides a good way to jointly estimate soil moisture content and soil properties of the resolved soil domain. Four DA methods combined with the two LSMs are used in our work: the Ensemble Kalman Filter (EnKF) using state augmentation or dual estimation, the Residual Resampling Particle Filter (RRPF) and Markov chain Monte Carlo Particle Filter (MCMCPF). These four DA methods are tuned and calibrated for a five month period, and subsequently evaluated for another five month period. Performances of the two LSMs and the four DA methods are compared. Our results show that all DA methods improve the estimation of soil moisture content of the VIC and CLM models, especially if the soil hydraulic properties (VIC), the maximum baseflow velocity (VIC) and/or soil texture (CLM) are jointly estimated with soil moisture content. The augmentation and dual estimation methods performed slightly better than RRPF and MCMCPF in the evaluation period. The differences in simulated soil moisture content between CLM and VIC were larger than variations among the DA methods. The CLM performed better than the VIC model. The strong underestimation of soil moisture content in the third layer of the VIC model is likely related to an inadequate parameterization of groundwater drainage.
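
    A minimal sketch of the state-augmentation idea used with the EnKF above: each ensemble member carries the soil moisture state together with an uncertain parameter, and both are updated from a soil moisture observation. The toy bucket-style dynamics below stand in for VIC/CLM; all numbers are assumptions.

      # Joint state-parameter estimation by EnKF state augmentation on a toy model:
      # member = [soil moisture, loss-rate parameter]; the stochastic EnKF update
      # corrects both from an observation of the state.
      import numpy as np

      rng = np.random.default_rng(6)
      N = 100
      theta_true, x_true = 0.15, 0.30                       # assumed true parameter and state

      ens = np.c_[0.30 + 0.05 * rng.standard_normal(N),                  # state
                  0.05 + 0.05 * np.abs(rng.standard_normal(N))]          # uncertain parameter

      R = 0.02 ** 2                                         # observation error variance
      for _ in range(20):
          forcing = 0.02
          x_true = x_true + forcing - theta_true * x_true
          y = x_true + np.sqrt(R) * rng.standard_normal()   # synthetic observation
          # Forecast: propagate each member with its own parameter (parameter persists).
          ens[:, 0] = ens[:, 0] + forcing - ens[:, 1] * ens[:, 0]
          # Analysis: update the augmented vector [state, parameter] from the observation.
          Hx = ens[:, 0]
          C = np.cov(ens.T)                                 # 2x2 augmented covariance
          K = C[:, 0] / (C[0, 0] + R)                       # Kalman gain for H = [1, 0]
          perturbed = y + np.sqrt(R) * rng.standard_normal(N)
          ens += K[None, :] * (perturbed - Hx)[:, None]

      print(ens[:, 1].mean())                               # parameter estimate after updates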

  2. Optimization of primer specific filter metrics for the assessment of mitochondrial DNA sequence data

    PubMed Central

    CURTIS, PAMELA C.; THOMAS, JENNIFER L.; PHILLIPS, NICOLE R.; ROBY, RHONDA K.

    2011-01-01

    Filter metrics are used as a quick assessment of sequence trace files in order to sort data into different categories, i.e. High Quality, Review, and Low Quality, without human intervention. The filter metrics consist of two numerical parameters for sequence quality assessment: trace score (TS) and contiguous read length (CRL). Primer specific settings for the TS and CRL were established using a calibration dataset of 2817 traces and validated using a concordance dataset of 5617 traces. Prior to optimization, 57% of the traces required manual review before import into a sequence analysis program, whereas after optimization only 28% of the traces required manual review. After optimization of primer specific filter metrics for mitochondrial DNA sequence data, an overall reduction of review of trace files translates into increased throughput of data analysis and decreased time required for manual review. PMID:21171863
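
    A sketch of the three-way triage that the filter metrics enable, with placeholder thresholds; the optimized primer-specific TS and CRL settings from the calibration dataset are not reproduced here.

      # Route a sequence trace on its trace score (TS) and contiguous read length (CRL).
      # Threshold values are placeholders, not the paper's optimized per-primer settings.
      def classify_trace(ts, crl, ts_high=35, ts_low=20, crl_high=350, crl_low=150):
          if ts >= ts_high and crl >= crl_high:
              return "High Quality"          # import without manual review
          if ts < ts_low or crl < crl_low:
              return "Low Quality"           # reject or re-sequence
          return "Review"                    # flag for manual review

      for ts, crl in [(42, 500), (28, 240), (15, 90)]:
          print(ts, crl, "->", classify_trace(ts, crl))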

  3. X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES

    EPA Science Inventory

    X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...

  4. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies.

    PubMed

    Arunkumar, R; Hogancamp, Kristina U; Parsons, Michael S; Rogers, Donna M; Norton, Olin P; Nagel, Brian A; Alderman, Steven L; Waggoner, Charles A

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 x 30 x 29 cm(3) nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m(3)/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 degrees C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m(3)/min, high mass concentrations (approximately 25 mg/m(3)) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions. PMID

  5. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 × 30 × 29 cm³ nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m³/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 °C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m³/min, high mass concentrations (~25 mg/m³) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

  6. Markerless Human Motion Tracking Using Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization

    PubMed Central

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-views video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and it is trapped easily into local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using Brown and HumanEva-II dataset demonstrated that H-MCPSO performance is better than two leading alternative approaches—Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims. PMID:25978493

  7. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-views video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and it is trapped easily into local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using Brown and HumanEva-II dataset demonstrated that H-MCPSO performance is better than two leading alternative approaches-Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims. PMID:25978493

  8. A STUDY ON ASH PARTICLE DISTRIBUTION CHARACTERISTICS OF CANDLE FILTER SURFACE REGENERATION AT ROOM TEMPERATURE

    SciTech Connect

    Vasudevan, V.; Kang, B.S-J.; Johnson, E.K.

    2002-09-19

    Ceramic barrier filtration is a leading technology employed in hot gas filtration. Hot gases loaded with ash particle flow through the ceramic candle filters and deposit ash on their outer surface. The deposited ash is periodically removed using back pulse cleaning jet, known as surface regeneration. The cleaning done by this technique still leaves some residual ash on the filter surface, which over a period of time sinters, forms a solid cake and leads to mechanical failure of the candle filter. A room temperature testing facility (RTTF) was built to gain more insight into the surface regeneration process before testing commenced at high temperature. RTTF was instrumented to obtain pressure histories during the surface regeneration process and a high-resolution high-speed imaging system was integrated in order to obtain pictures of the surface regeneration process. The objective of this research has been to utilize the RTTF to study the surface regeneration process at the convenience of room temperature conditions. The face velocity of the fluidized gas, the regeneration pressure of the back pulse and the time to build up ash on the surface of the candle filter were identified as the important parameters to be studied. Two types of ceramic candle filters were used in the study. Each candle filter was subjected to several cycles of ash build-up followed by a thorough study of the surface regeneration process at different parametric conditions. The pressure histories in the chamber and filter system during build-up and regeneration were then analyzed. The size distribution and movement of the ash particles during the surface regeneration process was studied. Effect of each of the parameters on the performance of the regeneration process is presented. A comparative study between the two candle filters with different characteristics is presented.

  9. Integration of GPS Precise Point Positioning and MEMS-Based INS Using Unscented Particle Filter

    PubMed Central

    Abd Rabbou, Mahmoud; El-Rabbany, Ahmed

    2015-01-01

    The integration of the Global Positioning System (GPS) and an Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical system (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique that can accommodate severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446

  10. Integration of GPS precise point positioning and MEMS-based INS using unscented particle filter.

    PubMed

    Abd Rabbou, Mahmoud; El-Rabbany, Ahmed

    2015-01-01

    The integration of the Global Positioning System (GPS) and an Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical system (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique that can accommodate severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446