An optimization-based parallel particle filter for multitarget tracking
NASA Astrophysics Data System (ADS)
Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.
2005-09-01
Particle filter based estimation is becoming more popular because it can effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements, and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, an augmented state vector of target dynamics is typically used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is a promising route to real-time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for a bus-connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and of mapping the particles among the selected processors. Furthermore, we propose a less communication-intensive parallel implementation of the particle filter that sacrifices no tracking accuracy, using an efficient load balancing technique in which optimal particle migration is ensured. We present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved through parallelization.
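The scheduling idea above can be illustrated with a toy load-balancing rule: assign particle counts in proportion to processor speed so that all secondary processors finish at roughly the same time. This is a hypothetical proportional sketch, not the paper's optimization-based scheduler; the function name and the speed model are illustrative assumptions.

```python
def balance_particles(num_particles, speeds):
    """Split a particle population across heterogeneous processors in
    proportion to their speeds, so completion times are roughly equal.
    (Illustrative sketch; the paper solves a richer scheduling problem.)"""
    total = float(sum(speeds))
    shares = [int(num_particles * s / total) for s in speeds]
    # Hand leftover particles (lost to rounding down) to the fastest processors.
    leftover = num_particles - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda k: -speeds[k])[:leftover]:
        shares[i] += 1
    return shares

# A processor twice as fast receives twice the particles.
shares = balance_particles(1000, [1.0, 1.0, 2.0])  # [250, 250, 500]
```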
Clever particle filters, sequential importance sampling and the optimal proposal
NASA Astrophysics Data System (ADS)
Snyder, Chris
2014-05-01
Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
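The degeneracy mechanism described here is easy to reproduce numerically: if each particle's log-weight is a sum of d i.i.d. contributions (one per degree of freedom), the largest normalized weight approaches one as d grows. A small illustrative experiment, with an assumed Gaussian log-weight model:

```python
import math
import random

def max_normalized_weight(n_particles, dim, rng):
    # Model each particle's log-weight as a sum of dim i.i.d. N(0,1)
    # terms, mimicking a system whose degrees of freedom are i.i.d.
    logw = [sum(rng.gauss(0.0, 1.0) for _ in range(dim))
            for _ in range(n_particles)]
    m = max(logw)  # subtract the max before exponentiating, for stability
    w = [math.exp(lw - m) for lw in logw]
    return max(w) / sum(w)

rng = random.Random(0)
low_dim = max_normalized_weight(200, 1, rng)     # weights stay spread out
high_dim = max_normalized_weight(200, 100, rng)  # one weight dominates
```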
Algorithmic and architectural optimizations for computationally efficient particle filtering.
Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama
2008-05-01
In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although particle filtering methods generally yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
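The appeal of the Independent Metropolis-Hastings sampler for such architectures is that proposals do not depend on the current state, so proposal generation can be pipelined; only the accept/reject step is sequential. A minimal generic sketch, not the paper's pipelined design; the target and proposal densities below are assumed purely for illustration:

```python
import math
import random

def imh_sample(log_target, propose, log_proposal, n, rng):
    """Independent Metropolis-Hastings: candidates come from a fixed
    proposal distribution, independent of the current state. The
    acceptance ratio compares importance ratios target/proposal."""
    x = propose(rng)
    out = []
    for _ in range(n):
        y = propose(rng)
        log_a = (log_target(y) - log_proposal(y)) - (log_target(x) - log_proposal(x))
        if math.log(rng.random()) < log_a:
            x = y  # accept the candidate
        out.append(x)
    return out

# Sample a N(1, 1) target through a broader N(0, 2) proposal.
rng = random.Random(2)
draws = imh_sample(lambda x: -0.5 * (x - 1.0) ** 2,
                   lambda r: r.gauss(0.0, 2.0),
                   lambda x: -x * x / 8.0,
                   20000, rng)
mean_est = sum(draws) / len(draws)
```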
Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter
NASA Astrophysics Data System (ADS)
Ito, A.
2014-12-01
Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. Development of advanced methodologies for data-model fusion, or data-assimilation, is therefore an important task for reducing the uncertainties and improving model predictability. In this study, I apply the particle filter (or Sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is a computationally intensive method applicable to nonlinear systems; this is an advantage of the method in comparison with other techniques such as the Ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this method would also be effective for assimilating satellite observational data. In the non-sequential case, on the other hand, the annual or long-term mean budget was adjusted to observations; this method would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
Multilevel Ensemble Transform Particle Filtering
NASA Astrophysics Data System (ADS)
Gregory, Alastair; Cotter, Colin; Reich, Sebastian
2016-04-01
This presentation extends the Multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, Multilevel Monte Carlo is applied to a certain variant of the particle filter, the Ensemble Transform Particle Filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.
Research on improved mechanism for particle filter
NASA Astrophysics Data System (ADS)
Yu, Jinxia; Xu, Jingmin; Tang, Yongli; Zhao, Qian
2013-03-01
Based on an analysis of the particle filter algorithm, two improved mechanisms are studied to improve the performance of the particle filter. First, a hybrid proposal distribution with an annealing parameter is studied in order to use the current information of the latest observed measurement to optimize the particle filter. Then, the resampling step in the particle filter is improved by two methods based on partial stratified resampling (PSR). One uses the optimality idea to improve the weights after implementing PSR; the other uses the optimality idea to improve the weights before implementing PSR and applies an adaptive mutation operation to all particles to ensure the diversity of the particle set after PSR. Finally, simulations based on single-object tracking are carried out, and the performance of the improved mechanisms for the particle filter is evaluated.
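For reference, plain stratified resampling, on which PSR builds, draws one uniform variate from each of N equal strata of (0, 1] and maps it through the cumulative weight distribution; a minimal sketch:

```python
import random

def stratified_resample(weights, rng):
    """Stratified resampling: one uniform draw per stratum (i, i+1]/N,
    mapped through the CDF of the normalized weights. Returns the
    (non-decreasing) list of selected particle indices."""
    n = len(weights)
    total = float(sum(weights))
    positions = [(i + rng.random()) / n for i in range(n)]
    indexes, cumsum, j = [], weights[0] / total, 0
    for p in positions:
        while p > cumsum:  # advance through the CDF to the stratum's draw
            j += 1
            cumsum += weights[j] / total
        indexes.append(j)
    return indexes

rng = random.Random(1)
idx = stratified_resample([0.1, 0.1, 0.8], rng)
```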
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wang, Zhenzhan; Shi, Hanqing; Long, Zhiyong; Du, Huadong
2016-08-01
This paper established a geophysical retrieval algorithm for sea surface wind vector, sea surface temperature, columnar atmospheric water vapor, and columnar cloud liquid water from WindSat, using the measured brightness temperatures and a matchup database. To retrieve the wind vector, a chaotic particle swarm approach was used to determine a set of possible wind vector solutions which minimize the difference between the forward model and the WindSat observations. An adjusted circular median filtering function was adopted to remove wind direction ambiguity. The validation of the wind speed, wind direction, sea surface temperature, columnar atmospheric water vapor, and columnar liquid cloud water indicates that this algorithm is feasible and reasonable and can be used to retrieve these atmospheric and oceanic parameters. Compared with moored buoy data, the RMS errors for wind speed and sea surface temperature were 0.92 m s-1 and 0.88°C, respectively. The RMS errors for columnar atmospheric water vapor and columnar liquid cloud water were 0.62 mm and 0.01 mm, respectively, compared with F17 SSMIS results. In addition, monthly average results indicated that these parameters are in good agreement with AMSR-E results. Wind direction retrieval was studied under various wind speed conditions and validated by comparing to the QuikSCAT measurements, and the RMS error was 13.3°. This paper offers a new approach to the study of ocean wind vector retrieval using a polarimetric microwave radiometer.
Bounds on the performance of particle filters
NASA Astrophysics Data System (ADS)
Snyder, C.; Bengtsson, T.
2014-12-01
Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. A persistent issue with all particle filters is degeneracy of the importance weights, where one or a few particles receive almost all the weight. Considering single-step filters such as the equivalent-weights or implicit particle filters (that is, those in which the particles and weights at time tk depend only on the observations at tk and the particles and weights at tk-1), two results provide a bound on their performance. First, the optimal proposal minimizes the variance of the importance weights not only over draws of the particles at tk, but also over draws from the joint proposal for tk-1 and tk. This shows that a particle filter using the optimal proposal will have minimal degeneracy relative to all other single-step filters. Second, the asymptotic results of Bengtsson et al. (2008) and Snyder et al. (2008) also hold rigorously for the optimal proposal in the case of linear, Gaussian systems. The number of particles necessary to avoid degeneracy must increase exponentially with the variance of the incremental importance weights. In the simplest examples, that variance is proportional to the dimension of the system, though in general it depends on other factors, including the characteristics of the observing network. A rough estimate indicates that a single-step particle filter applied to global numerical weather prediction will require very large numbers of particles.
NASA Astrophysics Data System (ADS)
Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús
2006-05-01
Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.
Nonlinear optimal semirecursive filtering
NASA Astrophysics Data System (ADS)
Daum, Frederick E.
1996-05-01
This paper describes a new hybrid approach to filtering, in which part of the filter is recursive while another part is non-recursive. The practical utility of this notion is to reduce computational complexity. In particular, if the non-recursive part of the filter is sufficiently small, then such a filter might be cost-effective to run in real time with computer technology available now or in the future.
Optimization of integrated polarization filters.
Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J
2014-10-01
This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert
1998-04-30
Reliable, maintainable and cost-effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. A plan for the Option I, bench-scale test program has also been developed.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm, together with conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
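The basic (unimproved) particle swarm update that such work starts from can be sketched as follows; the inertia and acceleration constants are typical textbook values, not the paper's tuned settings:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal single-objective particle swarm optimizer (a generic
    sketch, not the paper's improved algorithm)."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social constants
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```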
Particle flow for nonlinear filters with log-homotopy
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2008-04-01
We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20 that are smooth and fully coupled (i.e. dense not sparse). The new filter implements Bayes' rule using particle flow rather than with a pointwise multiplication of two functions; this avoids one of the fundamental and well known problems in particle filters, namely "particle collapse" as a result of Bayes' rule. We use a log-homotopy to derive the ODE that describes particle flow. This paper was written for normal engineers, who do not have homotopy for breakfast.
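In the scalar linear-Gaussian case the flow is known in closed form, which makes the idea easy to demonstrate: instead of reweighting, each particle is migrated along the homotopy parameter from prior to posterior. A sketch under those assumptions, using simple Euler integration of the published linear-Gaussian exact-flow coefficients:

```python
import random

def exact_flow_update(particles, prior_mean, P, z, R, steps=200):
    """Scalar exact particle flow for a linear-Gaussian measurement
    z = x + v, v ~ N(0, R): move each particle along dx/dlam = A*x + b
    for lam in [0, 1] instead of multiplying by a likelihood weight."""
    dl = 1.0 / steps
    x = list(particles)
    for k in range(steps):
        lam = k * dl
        A = -0.5 * P / (lam * P + R)
        b = (1 + 2 * lam * A) * ((1 + lam * A) * P * z / R + A * prior_mean)
        x = [xi + dl * (A * xi + b) for xi in x]  # Euler step
    return x

rng = random.Random(0)
prior_mean, P, z, R = 0.0, 4.0, 2.0, 1.0
particles = [rng.gauss(prior_mean, P ** 0.5) for _ in range(2000)]
moved = exact_flow_update(particles, prior_mean, P, z, R)
# Kalman posterior mean for this linear-Gaussian problem: 1.6
posterior_mean = prior_mean + P / (P + R) * (z - prior_mean)
```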
Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization
Pei, Fujun; Wu, Mei; Zhang, Simin
2014-01-01
The distributed SLAM system has a similar estimation performance to, and requires only one-fifth of the computation time of, a centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in SLAM problems that involve a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two respects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed that mixes the innovation method and the effective-particle-number method, combining the advantages of both. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and gives the system better tolerance and robustness. PMID:24883362
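The "number of effective particles" criterion mentioned above is the standard effective sample size, ESS = 1 / sum(w_i^2) over normalized weights; resampling is typically triggered when it falls below a threshold such as N/2. A minimal sketch:

```python
def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) over normalized weights: equals N for
    uniform weights, and 1 when a single particle carries all weight."""
    total = float(sum(weights))
    w = [x / total for x in weights]
    return 1.0 / sum(x * x for x in w)

ess_uniform = effective_sample_size([1.0] * 10)          # 10.0
ess_degenerate = effective_sample_size([1.0, 0.0, 0.0])  # 1.0
```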
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, as the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents; the algorithm relies on this combination of traits to produce an improved solution over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
Westinghouse Advanced Particle Filter System
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.; Bachovchin, D.M.
1996-12-31
Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial, power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper reports on the development and status of testing of the Westinghouse Advanced Hot Gas Particle Filter (W-APF) including: W-APF integrated operation with the American Electric Power, 70 MW PFBC clean coal facility--approximately 6000 test hours completed; approximately 2500 hours of testing at the Hans Ahlstrom 10 MW PCFB facility located in Karhula, Finland; over 700 hours of operation at the Foster Wheeler 2 MW 2nd generation PFBC facility located in Livingston, New Jersey; status of Westinghouse HGF supply for the DOE Southern Company Services Power System Development Facility (PSDF) located in Wilsonville, Alabama; the status of the Westinghouse development and testing of HGFs for Biomass Power Generation; and the status of the design and supply of the HGF unit for the 95 MW Pinon Pine IGCC Clean Coal Demonstration.
System and Apparatus for Filtering Particles
NASA Technical Reports Server (NTRS)
Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)
2015-01-01
A modular pre-filtration apparatus may be beneficial to extend the life of a filter. The apparatus may include an impactor that can collect a first set of particles in the air, and a scroll filter that can collect a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus causing the life of the filter to be increased.
Angle only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2011-09-01
We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.
Early maritime applications of particle filtering
NASA Astrophysics Data System (ADS)
Richardson, Henry R.; Stone, Lawrence D.; Monach, W. Reynolds; Discenza, Joseph H.
2003-12-01
This paper provides a brief history of some operational particle filters that were used by the U. S. Coast Guard and U. S. Navy. Starting in 1974 the Coast Guard system provided Search and Rescue Planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972. The systems operated in a sequential, Bayesian manner. A prior distribution for the target's location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target's location. This distribution was used as the prior for the next iteration of planning and search.
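The Bayesian update at the heart of such search systems is simple: after an unsuccessful search, the probability in each searched cell is multiplied by its miss probability and the map is renormalized. A minimal gridded sketch; the cell layout and detection probability below are illustrative assumptions:

```python
def bayes_search_update(prior, searched_cells, p_detect):
    """Posterior over cells after a search with no detection: searched
    cells are scaled by the miss probability (1 - p_detect), then the
    whole map is renormalized (Bayes' rule for a 'no detection' event)."""
    post = list(prior)
    for c in searched_cells:
        post[c] *= (1.0 - p_detect)
    total = sum(post)
    return [p / total for p in post]

# Four equally likely cells; cells 0 and 1 searched with 80% detection.
prior = [0.25, 0.25, 0.25, 0.25]
post = bayes_search_update(prior, [0, 1], 0.8)
# Probability mass shifts toward the unsearched cells 2 and 3.
```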
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar
2002-06-30
Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program
Westinghouse advanced particle filter system
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.
1995-11-01
Integrated Gasification Combined Cycles (IGCC), Pressurized Fluidized Bed Combustion (PFBC) and Advanced PFBC (APFB) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC, PFBC and APFB in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of these advanced, solid fuel power generation cycles.
Optimal rate filters for biomedical point processes.
McNames, James
2005-01-01
Rate filters are used to estimate the mean event rate of many biomedical signals that can be modeled as point processes. Historically these filters have been designed using principles from two distinct fields. Signal processing principles are used to optimize the filter's frequency response. Kernel estimation principles are typically used to optimize the asymptotic statistical properties. This paper describes a design methodology that combines these principles from both fields to optimize the frequency response subject to constraints on the filter's order, symmetry, time-domain ripple, DC gain, and minimum impulse response. Initial results suggest that time-domain ripple and a negative impulse response are necessary to design a filter with a reasonable frequency response. This suggests that some of the common assumptions about the properties of rate filters should be reconsidered. PMID:17282132
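The basic operation of a rate filter, smoothing a point process into an event-rate estimate with an FIR kernel, can be sketched as below. This is a generic illustration using an off-the-shelf Hann window, not the constrained optimal design the paper derives; the function name and parameters are illustrative.

```python
import numpy as np

def rate_estimate(event_times, fs, duration, win_len):
    """Estimate the mean event rate (events/s) of a point process by
    convolving a sampled binary event train with a unit-area FIR kernel."""
    n = int(duration * fs)
    train = np.zeros(n)
    idx = (np.asarray(event_times) * fs).astype(int)
    train[idx[idx < n]] = 1.0          # one event per sample assumed
    w = np.hanning(win_len)
    w *= fs / w.sum()                  # unit area in time -> output in events/s
    return np.convolve(train, w, mode="same")
```

For a regular 10 Hz event train, the estimate away from the edges hovers near 10 events/s; the paper's point is that the kernel shape itself should be optimized jointly in the frequency and statistical domains rather than chosen ad hoc as here.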
Westinghouse advanced particle filter system
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.
1994-10-01
Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation applications. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify, through analysis and testing, a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper updates the assessment of the Westinghouse hot gas filter design based on ongoing testing and analysis. Results are summarized from recent computational fluid dynamics modeling of the plenum flow during back pulse, analysis of candle stressing under cleaning and process transient conditions, and testing and analysis to evaluate potential flow-induced candle vibration.
Adaptive Mallow's optimization for weighted median filters
NASA Astrophysics Data System (ADS)
Rachuri, Raghu; Rao, Sathyanarayana S.
2002-05-01
This work extends the idea of spectral optimization for the design of weighted median (WM) filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filter are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail-preserving ability and yield non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
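The weighted median, the building block of WM filters, can be computed by sorting the window and returning the first sample at which the cumulative weight reaches half the total. The sketch below handles positive weights only (the sign-coupling needed to admit negative weights, the extension this paper makes, is omitted); names are illustrative.

```python
import numpy as np

def weighted_median(x, w):
    """Weighted median for positive weights: sort the samples and return the
    first value at which the cumulative weight reaches half the total."""
    x = np.asarray(x, float)
    order = np.argsort(x)
    cw = np.cumsum(np.asarray(w, float)[order])
    return x[order][np.searchsorted(cw, 0.5 * cw[-1])]

def wm_filter(signal, weights):
    """Slide a weighted-median window (odd length) over the signal,
    padding the edges by repetition."""
    k = len(weights) // 2
    padded = np.pad(np.asarray(signal, float), k, mode="edge")
    return np.array([weighted_median(padded[i:i + len(weights)], weights)
                     for i in range(len(signal))])
```

With uniform weights this reduces to the ordinary median filter, which is why an isolated impulse is removed completely, the robustness property the abstract contrasts with linear filters.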
Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer
2015-01-01
The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m⁻² when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m⁻² with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037
NASA Astrophysics Data System (ADS)
Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.
2012-12-01
The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
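For reference, the standard particle filter that both variants above modify, propagate, weight by the likelihood, then resample, can be sketched on a toy scalar model. This is a generic bootstrap/SIR illustration, not the rainfall-runoff setup of the study; the model and parameter names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(obs, n_particles, a=0.9, q=0.5, r=1.0):
    """Minimal bootstrap (SIR) particle filter for the toy model
    x_t = a*x_{t-1} + N(0,q),  y_t = x_t + N(0,r).
    Returns the posterior-mean state estimate at each step."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (y - x) ** 2 / r)                   # Gaussian likelihood
        w /= w.sum()
        means.append(np.sum(w * x))                           # weighted posterior mean
        x = rng.choice(x, size=n_particles, p=w)              # multinomial resample
    return np.array(means)
```

The resample-move and Gaussian-proposal variants discussed above both aim at the weakness visible here: after resampling, many particles are copies of a few ancestors, which impoverishes the population.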
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
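The spiral dynamics the technique is inspired by can be sketched in two dimensions: every search point is rotated and contracted around the current best point, sweeping the region as it converges. This is a generic illustration of the metaheuristic (in the spirit of Tamura and Yasuda's spiral optimization), not the authors' filter-design code; all parameters are illustrative.

```python
import numpy as np

def spiral_optimize(f, bounds, n_points=20, iters=200,
                    r=0.95, theta=np.pi / 4, seed=0):
    """Minimize f over a 2-D box by spiral dynamics: all points are rotated
    by theta and contracted by r about the best point found so far."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pts = rng.uniform(lo, hi, size=(n_points, 2))
    rot = r * np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
    best = pts[np.argmin([f(p) for p in pts])].copy()
    for _ in range(iters):
        pts = best + (pts - best) @ rot.T      # spiral in toward the best point
        cand = pts[np.argmin([f(p) for p in pts])]
        if f(cand) < f(best):
            best = cand.copy()                 # recenter the spiral
    return best
```

Because r < 1, the population contracts deterministically, which is one reason the method is cheap; the filter-design application replaces the toy objective with a frequency-response error plus phase terms.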
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the ensemble Kalman filter and the sequential importance resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical
A comparison of EAKF and particle filter: towards an ensemble adjustment Kalman particle filter
NASA Astrophysics Data System (ADS)
Zhang, Xiangming; Shen, Zheqi; Tang, Youmin
2016-04-01
Bayesian estimation theory provides a general approach to state estimation. In this study, we first explore two Bayesian-based methods: the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (the Lorenz '63 model). The EAKF can be regarded as a deterministic scheme of the ensemble Kalman filter (EnKF), which performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter when the ensemble size is large enough to avoid filter degeneracy, and vice versa. On the basis of these comparisons, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF given limited ensemble size.
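The Lorenz '63 testbed used above is straightforward to reproduce; a minimal RK4 integration with the standard parameter values (not tied to the paper's exact experimental configuration) looks like:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz '63 system with classic parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step of size dt."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The model's bounded but chaotic trajectories are what make it a standard stress test for both Kalman-type and particle-type filters.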
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
Particle filter-based track before detect algorithms
NASA Astrophysics Data System (ADS)
Boers, Yvo; Driessen, Hans
2003-12-01
In this paper we give a general system setup that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique where tracks are produced directly from raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, which leads to better detection and tracking performance, especially for weak targets. In this paper we look at both the filtering and the detection aspects of TBD. We formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the solution is implemented by means of a particle filter.
Factored interval particle filtering for gait analysis.
Saboune, Jamal; Rose, Cédric; Charpillet, François
2007-01-01
Commercial gait analysis systems rely on wearable sensors. The goal of this study is to develop a low-cost, markerless human motion capture tool. Our method is based on the estimation of 3D movements using video streams and the projection of a 3D human body model. Dynamic parameters depend only on human body movement constraints. No trained gait model is used, which makes this approach generic. The 3D model is characterized by the angular positions of its articulations. The kinematic chain structure allows the state vector representing the configuration of the model to be factored. We use a dynamic Bayesian network and a modified particle filtering algorithm to estimate the most likely state configuration given an observation sequence. The modified algorithm takes advantage of the factorization of the state vector to efficiently weight and resample the particles. PMID:18002684
GNSS data filtering optimization for ionospheric observation
NASA Astrophysics Data System (ADS)
D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.
2015-12-01
In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are typically filtered out, to remove possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. Certainly, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim of tackling the non-ionospheric effects induced by multipath at ground level, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aiming at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained from the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy
Constrained filter optimization for subsurface landmine detection
NASA Astrophysics Data System (ADS)
Torrione, Peter A.; Collins, Leslie; Clodfelter, Fred; Lulich, Dan; Patrikar, Ajay; Howard, Peter; Weaver, Richard; Rosen, Erik
2006-05-01
Previous large-scale blind tests of anti-tank landmine detection utilizing the NIITEK ground penetrating radar indicated the potential for very high anti-tank landmine detection probabilities at very low false alarm rates for algorithms based on adaptive background cancellation schemes. Recent data collections under more heterogeneous multi-layered road scenarios indicate that although adaptive solutions to background cancellation are effective, the adaptive solutions obtained under different road conditions can differ significantly, and misapplication of these solutions can reduce landmine detection performance in terms of PD/FAR. In this work we present a framework for the constrained optimization of background-estimation filters that specifically seeks to optimize PD/FAR performance as measured by the area under the ROC curve between two FARs. We also consider the application of genetic algorithms to the problem of filter optimization for landmine detection. Results indicate robust performance for both static and adaptive background cancellation schemes, and possible real-world advantages and disadvantages of static and adaptive approaches are discussed.
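The optimization criterion used here, area under the ROC curve between two false-alarm rates, can be computed directly from detector scores. The sketch below is generic; the function name and the choice to normalize by the FAR interval width are ours, not the paper's.

```python
import numpy as np

def partial_auc(scores_pos, scores_neg, far_lo, far_hi, n_grid=200):
    """Area under the ROC curve between two false-alarm rates, normalized
    by the interval width so that a perfect detector scores 1.0."""
    fars = np.linspace(far_lo, far_hi, n_grid)
    # threshold on the noise-only scores that yields each target FAR
    thresholds = np.quantile(scores_neg, 1.0 - fars)
    pds = np.array([(scores_pos >= t).mean() for t in thresholds])
    # trapezoidal integration of PD over the FAR interval
    area = np.sum(0.5 * (pds[1:] + pds[:-1]) * np.diff(fars))
    return area / (far_hi - far_lo)
```

Restricting the integral to a low-FAR band focuses the filter optimization on the operating region that matters for mine detection, rather than on the full-AUC average.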
On optimal filtering of measured Mueller matrices
NASA Astrophysics Data System (ADS)
Gil, José J.
2016-07-01
While any two-dimensional mixed state of polarization of light can be represented by a combination of a pure state and a fully random state, any Mueller matrix can be represented by a convex combination of a pure component and three additional components whose randomness is scaled in a proper and objective way. Such characteristic decomposition constitutes the appropriate framework for the characterization of the polarimetric randomness of the system represented by a given Mueller matrix, and provides criteria for the optimal filtering of noise in experimental polarimetry.
Groupwise surface correspondence using particle filtering
NASA Astrophysics Data System (ADS)
Li, Guangxu; Kim, Hyoungseop; Tan, Joo Kooi; Ishikawa, Seiji
2015-03-01
To obtain an effective interpretation of organic shape using statistical shape models (SSMs), the correspondence of the landmarks through all the training samples is the most challenging part of model building. In this study, a coarse-to-fine groupwise correspondence method for 3-D polygonal surfaces is proposed. We prepare a reference model in advance. Then all the training samples are mapped to a unified spherical parameter space. According to the positions of the landmarks of the reference model, the candidate regions for correspondence are chosen. Finally we refine the perceptually correct correspondences between landmarks using a particle filter algorithm, where the likelihood of local surface features is introduced as the criterion. The proposed method was applied to the correspondence of 9 cases of left lung training samples. Experimental results show the proposed method is flexible and under-constrained.
Optimal edge filters explain human blur detection.
McIlhagga, William H; May, Keith A
2012-01-01
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222
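The derivative-of-Gaussian shape that the classification images resembled is the classic odd-symmetric edge filter; a minimal sketch (ours, for illustration, not the fitted model from the paper):

```python
import numpy as np

def dog_kernel(sigma, radius=None):
    """First derivative of a Gaussian: an odd-symmetric edge filter whose
    response peaks where the signal has a luminance step."""
    radius = radius or int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return -x * g / (sigma ** 2 * g.sum())

def edge_response(signal, sigma):
    """Convolve a 1-D signal with the derivative-of-Gaussian kernel."""
    return np.convolve(signal, dog_kernel(sigma), mode="same")
```

Applied to a step edge, the response is maximal at the edge location and near zero elsewhere; matching the filter's sigma to the edge's blur is the scale-selection step an optimal blur coder would add.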
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.
2012-12-01
Hydrologic data assimilation techniques provide a means to improve river discharge forecasts by updating hydrologic model states and correcting the atmospheric forcing data via optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation techniques used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques, the ensemble Kalman filter (EnKF), the particle filter (PF) and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km², in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique for hydrologic applications.
Online maintaining appearance model using particle filter
NASA Astrophysics Data System (ADS)
Chen, Siying; Lan, Tian; Wang, Jianyu; Ni, Guoqiang
2008-03-01
Tracking by foreground matching depends heavily on the appearance model to establish object correspondences among frames; essentially, the appearance model should encode both the part that differs between object and background, to guarantee robustness, and the stable part, to ensure tracking consistency. This paper provides a solution for online maintenance of appearance models by adjusting the features in the model. Object appearance is co-modeled by a subset of Haar features selected from an over-complete feature dictionary, which encodes the discriminative part of the object appearance, and a color histogram, which describes the stable appearance. During the particle filtering process, feature values from background patches and object observations are sampled efficiently with the aid of "background" and "foreground" particles respectively. Based on these sampled values, top-ranked discriminative features are added and invalid features are removed, to ensure that the object remains distinguishable from the current background according to the evolving appearance model. The tracker based on this online appearance model maintenance technique has been tested on people and car tracking tasks, and promising experimental results are obtained.
A backtracking algorithm that deals with particle filter degeneracy
NASA Astrophysics Data System (ADS)
Baarsma, Rein; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
Particle filters are an excellent way to deal with stochastic models incorporating Bayesian data assimilation. While they are computationally demanding, the particle filter has no problem with nonlinearity and it accepts non-Gaussian observational data. In the geoscientific field this computational demand creates a problem, since dynamic grid-based models are often already quite computationally demanding. As such, it is of the utmost importance to keep the number of samples in the filter as small as possible. Small sample populations often lead to filter degeneracy, however, especially in models with high stochastic forcing. Filter degeneracy renders the sample population useless, as the population is no longer statistically informative. We have created an algorithm in an existing data assimilation framework that reacts to and deals with filter degeneracy, based on Spiller et al. [2008]. During the Bayesian updating step of the standard particle filter, the algorithm tests the sample population for filter degeneracy. If filter degeneracy has occurred, the algorithm resets to the last time the filter worked correctly and recalculates the failed timespan of the filter with an increased sample population. The sample population is then reduced to its original size and the particle filter continues as normal. This algorithm was created in the PCRaster Python framework, an open source tool that enables spatio-temporal forward modelling in Python [Karssenberg et al., 2010]. The framework already contains several data assimilation algorithms, including a standard particle filter and a Kalman filter. The backtracking particle filter algorithm has been added to the framework, which will make it easy to implement in other research. The performance of the backtracking particle filter is tested against a standard particle filter using two models. The first is a simple nonlinear point model, and the second is a more complex geophysical model. The main testing
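The degeneracy test and roll-back logic described above can be sketched as follows. This is our simplified reading (one checkpoint, an effective-sample-size test, and a single retry with a larger population), not the actual PCRaster implementation; all names are illustrative.

```python
import numpy as np

def effective_sample_size(w):
    """ESS = 1 / sum(w_i^2) for normalized weights: N when weights are
    uniform, 1 when a single particle carries all the weight (degeneracy)."""
    w = np.asarray(w, float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def backtracking_pf(step, likelihood, x0, obs, grow=4, ess_frac=0.5, seed=0):
    """Particle filter that, when the ESS drops below ess_frac * N, rolls
    back to the checkpoint taken before the update and redoes the step with
    a grow-times larger population, then resamples back down to N particles.
    `step(x, rng)` propagates particles; `likelihood(x, y)` scores them."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    n = len(x)
    for y in obs:
        checkpoint = x.copy()
        xp = step(x, rng)
        w = likelihood(xp, y)
        if effective_sample_size(w) < ess_frac * len(xp):
            # backtrack: redo the failed step with an enlarged population
            big = rng.choice(checkpoint, size=grow * n)
            xp = step(big, rng)
            w = likelihood(xp, y)
        w = w / w.sum()
        x = rng.choice(xp, size=n, p=w)        # resample back to size N
    return x
```

Keeping the enlarged population only for the failed timespan is what preserves the cost advantage of a small ensemble in the well-behaved stretches of the run.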
Bayesian auxiliary particle filters for estimating neural tuning parameters.
Mountney, John; Sobel, Marc; Obeid, Iyad
2009-01-01
A common challenge in neural engineering is to track the dynamic parameters of neural tuning functions. This work introduces the application of Bayesian auxiliary particle filters for this purpose. Based on Monte-Carlo filtering, Bayesian auxiliary particle filters use adaptive methods to model the prior densities of the state parameters being tracked. The observations used are the neural firing times, modeled here as a Poisson process, and the biological driving signal. The Bayesian auxiliary particle filter was evaluated by simultaneously tracking the three parameters of a hippocampal place cell and compared to a stochastic state point process filter. It is shown that Bayesian auxiliary particle filters are substantially more accurate and robust than alternative methods of state parameter estimation. The effects of time-averaging on parameter estimation are also evaluated. PMID:19963911
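The weight-update step in such a filter, scoring each particle's proposed firing rate against the observed spike count under a Poisson model, can be sketched as below (names are illustrative; the full filter also propagates the tuning parameters themselves):

```python
import numpy as np

def poisson_weights(spike_count, rates, dt):
    """Normalized particle weights under a Poisson observation model:
    each particle proposes a firing rate (Hz), and its weight is the
    Poisson likelihood of `spike_count` events in a bin of width dt (s)."""
    lam = np.maximum(np.asarray(rates, float) * dt, 1e-12)
    loglik = spike_count * np.log(lam) - lam   # log Poisson pmf, up to the k! term
    w = np.exp(loglik - loglik.max())          # subtract max to avoid underflow
    return w / w.sum()
```

Working in log-likelihoods before normalizing is what keeps the update stable over many bins, a practical point the adaptive prior modeling in the paper builds on.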
Optimization of phononic filters via genetic algorithms
NASA Astrophysics Data System (ADS)
Hussein, M. I.; El-Beltagy, M. A.
2007-12-01
A phononic crystal is commonly characterized by its dispersive frequency spectrum. With appropriate spatial distribution of the constituent material phases, spectral stop bands could be generated. Moreover, it is possible to control the number, the width, and the location of these bands within a frequency range of interest. This study aims at exploring the relationship between unit cell configuration and frequency spectrum characteristics. Focusing on 1D layered phononic crystals, and longitudinal wave propagation in the direction normal to the layering, the unit cell features of interest are the number of layers and the material phase and relative thickness of each layer. An evolutionary search for binary- and ternary-phase cell designs exhibiting a series of stop bands at predetermined frequencies is conducted. A specially formulated representation and set of genetic operators that break the symmetries in the problem are developed for this purpose. An array of optimal designs for a range of ratios in Young's modulus and density are obtained and the corresponding objective values (the degrees to which the resulting bands match the predetermined targets) are examined as a function of these ratios. It is shown that a rather complex filtering objective could be met with a high degree of success. Structures composed of the designed phononic crystals are excellent candidates for use in a wide range of applications including sound and vibration filtering.
Metal finishing wastewater pressure filter optimization
Norford, S.W.; Diener, G.A.; Martin, H.L.
1992-12-31
The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated by the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until it is stabilized/solidified for permanent disposal. Of the stored waste, 85% is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and minimize waste generation.
Human-manipulator interface using particle filter.
Du, Guanglong; Zhang, Ping; Wang, Xueqian
2014-01-01
This paper presents a human-robot interface system that incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and translation of the human hand, uses the Camshift algorithm to track the hand, and uses a PF to estimate the hand's translation. Although a PF is used for estimating the translation, the translation error grows quickly whenever the sensors fail to detect the hand motion, so a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper therefore proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430
Tractable particle filters for robot fault diagnosis
NASA Astrophysics Data System (ADS)
Verma, Vandi
Experience has shown that even carefully designed and tested robots may encounter anomalous situations. It is therefore important for robots to monitor their state so that anomalous situations can be detected in a timely manner. Robot fault diagnosis typically requires tracking a very large number of possible faults in complex nonlinear dynamic systems with noisy sensors. Traditional methods either ignore the uncertainty or use linear approximations of the nonlinear system dynamics. Such approximations are often unrealistic, and as a result faults either go undetected or become confused with non-fault conditions. Probability theory provides a natural representation for uncertainty, but an exact Bayesian solution to the diagnosis problem is intractable. Classical Monte Carlo methods, such as particle filters, suffer from substantial computational complexity, particularly in the presence of rare yet important events such as many system faults. This thesis presents a set of complementary algorithms that provide an approach to computationally tractable fault diagnosis. These algorithms leverage probabilistic approaches to decision theory and information theory to efficiently track a large number of faults in a general dynamic system with noisy measurements. The problem of fault diagnosis is represented as hybrid (discrete/continuous) state estimation. Taking advantage of structure in the domain, the approach dynamically concentrates computation in the regions of state space that are currently most relevant, without losing track of less likely states. Experiments with a dynamic simulation of a six-wheel rocker-bogie rover show a significant improvement in performance over the classical approach.
Blended particle filters for large-dimensional chaotic dynamical systems.
Majda, Andrew J; Qi, Di; Sapsis, Themistoklis P
2014-05-27
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
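The 40-dimensional Lorenz 96 model used as the test bed above can be integrated in a few lines. The sketch below uses standard choices (forcing F = 8, a fourth-order Runge-Kutta integrator, dt = 0.01); these are assumptions for illustration, since the paper's exact turbulent regimes vary.

```python
def lorenz96_rhs(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, indices cyclic."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz 96 system."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs([v + 0.5 * dt * k for v, k in zip(x, k1)], forcing)
    k3 = lorenz96_rhs([v + 0.5 * dt * k for v, k in zip(x, k2)], forcing)
    k4 = lorenz96_rhs([v + dt * k for v, k in zip(x, k3)], forcing)
    return [v + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for v, a, b, c, d in zip(x, k1, k2, k3, k4)]

# 40-variable state, slightly perturbed from the unstable fixed point x_i = F
state = [8.0] * 40
state[0] += 0.01
for _ in range(500):  # 5 model time units at dt = 0.01
    state = rk4_step(state, 0.01)
```

After a few model time units the tiny perturbation has grown chaotically, which is what makes this system a stringent filtering test.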
Gao, Shuang; Kim, Jinyong; Yermakov, Michael; Elmashae, Yousef; He, Xinjian; Reponen, Tiina; Grinshpun, Sergey A
2015-01-01
Filtering facepiece respirators (FFRs) are commonly worn by first responders, first receivers, and other exposed groups to protect against exposure to airborne particles, including those originating from combustion. Most of these FFRs are NIOSH-certified (e.g., N95-type) based on performance testing of their filters against charge-equilibrated aerosol challenges, e.g., NaCl. However, it has not been examined whether the filtration data obtained with NaCl-challenged FFR filters adequately represent the protection against real aerosol hazards such as combustion particles. A filter sample of an N95 FFR mounted on a specially designed holder was challenged with NaCl particles and three combustion aerosols generated in a test chamber by burning wood, paper, and plastic. The concentrations upstream (Cup) and downstream (Cdown) of the filter were measured with a TSI P-Trak condensation particle counter and a Grimm Nanocheck particle spectrometer. Penetration was determined as (Cdown/Cup) × 100%. Four test conditions were chosen to represent inhalation flows of 15, 30, 55, and 85 L/min. Results showed that the penetration values of combustion particles were significantly higher than those of the "model" NaCl particles (p < 0.05), raising a concern about the applicability of N95 filter performance obtained with the NaCl aerosol challenge to protection against combustion particles. Aerosol type, inhalation flow rate, and particle size were significant (p < 0.05) factors affecting the performance of the N95 FFR filter. In contrast to N95 filters, the penetration of combustion particles through R95 and P95 FFR filters (tested in addition to N95) was not significantly higher than that obtained with NaCl particles. The findings were attributed to several effects, including the degradation of an N95 filter due to hydrophobic organic components generated into the air by combustion. Their interaction with fibers is anticipated to be similar to those involving "oily" particles
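The penetration measure defined above, P = (Cdown/Cup) × 100%, and its complement, collection efficiency, are one-line computations. A minimal helper follows; the concentration values in the test are hypothetical, not the study's data.

```python
def penetration_percent(c_up, c_down):
    """Penetration P = (C_down / C_up) * 100%, as defined in the study."""
    if c_up <= 0:
        raise ValueError("upstream concentration must be positive")
    return 100.0 * c_down / c_up

def efficiency_percent(c_up, c_down):
    """Filter collection efficiency, the complement of penetration."""
    return 100.0 - penetration_percent(c_up, c_down)
```

For instance, with hypothetical counts of 10,000 particles/cm³ upstream and 230 downstream, penetration is 2.3% and efficiency 97.7%.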
Symmetric Phase-Only Filtering in Particle-Image Velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2008-01-01
and second-image subregions are normalized by the square roots of their respective magnitudes. This scheme yields optimal performance because the amounts of normalization applied to the spatial-frequency contents of the input and filter scenes are just enough to enhance their high-spatial-frequency contents while reducing their spurious low-spatial-frequency content. As a result, in SPOF PIV processing, particle-displacement correlation peaks can readily be detected above spurious background peaks, without need for masking or background subtraction.
Simultaneous Eye Tracking and Blink Detection with Interactive Particle Filters
NASA Astrophysics Data System (ADS)
Wu, Junwen; Trivedi, Mohan M.
2007-12-01
We present a system that simultaneously tracks eyes and detects eye blinks. Two interactive particle filters are used for this purpose, one for the closed eyes and the other one for the open eyes. Each particle filter is used to track the eye locations as well as the scales of the eye subjects. The set of particles that gives higher confidence is defined as the primary set and the other one is defined as the secondary set. The eye location is estimated by the primary particle filter, and whether the eye status is open or closed is also decided by the label of the primary particle filter. When a new frame comes, the secondary particle filter is reinitialized according to the estimates from the primary particle filter. We use autoregression models for describing the state transition and a classification-based model for measuring the observation. Tensor subspace analysis is used for feature extraction which is followed by a logistic regression model to give the posterior estimation. The performance is carefully evaluated from two aspects: the blink detection rate and the tracking accuracy. The blink detection rate is evaluated using videos from varying scenarios, and the tracking accuracy is given by comparing with the benchmark data obtained using the Vicon motion capturing system. The setup for obtaining benchmark data for tracking accuracy evaluation is presented and experimental results are shown. Extensive experimental evaluations validate the capability of the algorithm.
Ballistic target tracking algorithm based on improved particle filtering
NASA Astrophysics Data System (ADS)
Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang
2015-10-01
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in nonlinear, non-Gaussian complex environments, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs well in estimating the states and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively alleviate the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, obtaining better particles for the estimation. Meanwhile, the CMPF improves state estimation precision and convergence speed compared with the EKF, the UKF, and the ordinary particle filter.
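As a hedged illustration of the chaos-map idea (the abstract does not give the paper's exact construction), a logistic map at μ = 4 produces a deterministic, well-spread sequence in (0, 1) that can stand in for pseudo-random draws in the particle sampling step:

```python
def logistic_map_sequence(n, x0=0.3, mu=4.0):
    """Iterate the chaotic logistic map x_{k+1} = mu * x_k * (1 - x_k).

    At mu = 4 the map is fully chaotic on (0, 1); its deterministic,
    well-spread iterates can replace pseudo-random uniforms when
    drawing particles (one plausible reading of the chaos-map idea).
    """
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs
```

The appeal is reproducibility and good coverage of the unit interval, which is one way such schemes improve sampling efficiency.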
Method of concurrently filtering particles and collecting gases
Mitchell, Mark A; Meike, Annemarie; Anderson, Brian L
2015-04-28
A system for concurrently filtering particles and collecting gases. Materials can be added (e.g., by coating the ceramic substrate, using loose powder(s), or other means) to a HEPA filter (ceramic, metal, or otherwise) to collect gases (e.g., radioactive gases such as iodine). The gases could be radioactive, hazardous, or valuable.
Resampling Algorithms for Particle Filters: A Computational Complexity Perspective
NASA Astrophysics Data System (ADS)
Bolić, Miodrag; Djurić, Petar M.; Hong, Sangjin
2004-12-01
Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization through addressing common issues such as decreasing the number of operations and memory access. Moreover, the algorithms allow for use of higher sampling frequencies by overlapping in time the resampling step with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and joint detection and estimation in wireless communications. We have demonstrated that the proposed algorithms reduce the complexity without performance degradation.
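One widely used low-complexity scheme of the kind analyzed here is systematic resampling, which needs a single uniform draw and one pass over the cumulative weights. The sketch below is a generic textbook version, not the paper's specific algorithms:

```python
import random

def systematic_resample(weights, u0=None):
    """Systematic resampling: N evenly spaced pointers offset by one uniform.

    A single pass over the cumulative weights suffices, which is one reason
    this scheme is attractive for real-time and hardware implementations.
    """
    n = len(weights)
    total = float(sum(weights))
    if u0 is None:
        u0 = random.random()          # one U(0, 1) draw for all N pointers
    positions = [(u0 + i) / n for i in range(n)]
    indices = []
    j = 0
    cum = weights[0] / total
    for p in positions:
        while p > cum and j < n - 1:  # advance through the cumulative sum
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices
```

High-weight particles are replicated in proportion to their weight; with a dominant weight of 0.7 out of four particles, that particle typically appears three times.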
Particle filter-based prognostics: Review, discussion and perspectives
NASA Astrophysics Data System (ADS)
Jouin, Marine; Gouriveau, Rafael; Hissel, Daniel; Péra, Marie-Cécile; Zerhouni, Noureddine
2016-05-01
Particle filters are of great interest in a large variety of engineering fields such as robotics, statistics, and automatic control. Recently, they have been adopted in Prognostics and Health Management (PHM) applications for diagnostics and prognostics, and according to some authors they have even become a state-of-the-art technique for prognostics. Nowadays, around 50 papers dealing with prognostics based on particle filters can be found in the literature. However, no comprehensive review has been proposed on the subject until now. This paper analyzes the way particle filters are used in that context. The development of the tool in the prognostics field is discussed before entering into the details of its practical use and implementation. Current issues are identified and analyzed, and some solutions or lines of work are proposed. All this aims at highlighting future perspectives as well as helping new users start with particle filters for prognostics.
Geomagnetic field modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
1980-01-01
Five individual 5 year mini-batch geomagnetic models were generated and two computer programs were developed to process the models. The first program computes statistics (mean sigma, weighted sigma) on the changes in the first derivatives (linear terms) of the spherical harmonic coefficients between mini-batches. The program ran successfully. The statistics are intended for use in computing the state noise matrix required in the information filter. The second program is the information filter. Most subroutines used in the filter were tested, but the coefficient statistics must be analyzed before the filter is run.
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric phase filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for determining the functional relationship between the indicators and alpha is unclear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimators using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
Optimal filter bandwidth for pulse oximetry
NASA Astrophysics Data System (ADS)
Stuban, Norbert; Niwayama, Masatsugu
2012-10-01
Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
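A first-order low-pass stage with a software-adjustable corner frequency, of the kind varied in this experiment, can be sketched as follows. The topology and parameter names are assumptions for illustration; the abstract does not specify the filter's implementation.

```python
import math

def lowpass(samples, fs, fc):
    """First-order IIR low-pass filter with corner frequency fc (Hz).

    y[n] = y[n-1] + a * (x[n] - y[n-1]), with a = dt / (RC + dt) and
    RC = 1 / (2 * pi * fc); the filter state starts at zero.
    """
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    a = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out
```

Lowering fc toward the pulse fundamental (roughly 1-1.5 Hz at resting heart rates) removes the harmonics, which per the paper's finding does not degrade the R-ratio accuracy.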
Optimal Gain Filter Design for Perceptual Acoustic Echo Suppressor
NASA Astrophysics Data System (ADS)
Kim, Kihyeon; Ko, Hanseok
This Letter proposes an optimal gain filter for the perceptual acoustic echo suppressor. We designed an optimally-modified log-spectral amplitude estimation algorithm for the gain filter in order to achieve robust suppression of echo and noise. A new parameter including information about interferences (echo and noise) of single-talk duration is statistically analyzed, and then the speech absence probability and the a posteriori SNR are judiciously estimated to determine the optimal solution. The experiments show that the proposed gain filter attains a significantly improved reduction of echo and noise with less speech distortion.
Entropy-based optimization of wavelet spatial filters.
Farina, Dario; Kamavuako, Ernest Nlandu; Wu, Jian; Naddeo, Francesco
2008-03-01
A new class of spatial filters for surface electromyographic (EMG) signal detection is proposed. These filters are based on the 2-D spatial wavelet decomposition of the surface EMG recorded with a grid of electrodes and inverse transformation after zeroing a subset of the transformation coefficients. The filter transfer function depends on the selected mother wavelet in the two spatial directions. Wavelet parameterization is proposed with the aim of signal-based optimization of the transfer function of the spatial filter. The optimization criterion was the minimization of the entropy of the time samples of the output signal. The optimized spatial filter is linear and space invariant. In simulated and experimental recordings, the optimized wavelet filter showed increased selectivity with respect to previously proposed filters. For example, in simulation, the ratio between the peak-to-peak amplitude of action potentials generated by motor units 20 degrees apart in the transversal direction was 8.58% (with monopolar recording), 2.47% (double differential), 2.59% (normal double differential), and 0.47% (optimized wavelet filter). In experimental recordings, the duration of the detected action potentials decreased from (mean +/- SD) 6.9 +/- 0.3 ms (monopolar recording), to 4.5 +/- 0.2 ms (normal double differential), 3.7 +/- 0.2 ms (double differential), and 3.0 +/- 0.1 ms (optimized wavelet filter). In conclusion, the new class of spatial filters with the proposed signal-based optimization of the transfer function allows better discrimination of individual motor unit activities in surface EMG recordings than was previously possible. PMID:18334382
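The optimization criterion above, minimizing the entropy of the output samples, can be estimated from an amplitude histogram. The sketch below uses a plug-in histogram estimator with an assumed bin count; it illustrates the criterion, not the paper's exact estimator.

```python
import math

def sample_entropy(signal, n_bins=32):
    """Histogram (plug-in) estimate, in bits, of the Shannon entropy of a
    signal's amplitude distribution; the optimization minimizes this."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0   # guard against a constant signal
    counts = [0] * n_bins
    for v in signal:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A selective spatial filter concentrates the output near zero with a few large action-potential peaks, which gives a lower entropy than a diffuse output.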
Forward-looking infrared 3D target tracking via combination of particle filter and SIFT
NASA Astrophysics Data System (ADS)
Li, Xing; Cao, Zhiguo; Yan, Ruicheng; Li, Tuo
2013-10-01
Aiming at the problem of tracking a 3D target in forward-looking infrared (FLIR) imagery, this paper proposes a high-accuracy, robust tracking algorithm based on SIFT and particle filtering. The main contribution is a new method of estimating the affine transformation matrix parameters based on the Monte Carlo methods of the particle filter. First, we extract SIFT features from the infrared image and calculate the initial affine transformation matrix from optimal candidate key points. Then we take the affine transformation parameters as particles and use an SIR (Sequential Importance Resampling) particle filter to estimate the best position. The experiments demonstrate that our algorithm is robust and achieves high accuracy.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with particle swarm optimization. The method mainly uses particle filters to detect and track the local optimum of the lane model in the input image, and then seeks the global optimum of the lane model by particle swarm optimization. The particle filter can effectively perform lane detection and tracking in complicated or variable lane environments; however, the result obtained is usually a local optimum of the system status rather than the global optimum. Thus, particle swarm optimization is used to further refine the global optimum among all system statuses. Since particle swarm optimization is a global, iterative optimization algorithm, it can find the global optimal lane model by simulating the food-finding behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
Particle-filter-based phase estimation in digital holographic interferometry.
Waghmare, Rahul G; Ram Sukumar, P; Subrahmanyam, G R K S; Singh, Rakesh Kumar; Mishra, Deepak
2016-03-01
In this paper, we propose a particle-filter-based technique for the analysis of a reconstructed interference field. The particle filter and its variants are well proven as tracking filters in non-Gaussian and nonlinear situations. We propose to apply the particle filter for direct estimation of phase and its derivatives from digital holographic interferometric fringes via a signal-tracking approach on a Taylor series expanded state model and a polar-to-Cartesian-conversion-based measurement model. Computation of sample weights through non-Gaussian likelihood forms the major contribution of the proposed particle-filter-based approach compared to the existing unscented-Kalman-filter-based approach. It is observed that the proposed approach is highly robust to noise and outperforms the state-of-the-art especially at very low signal-to-noise ratios (i.e., especially in the range of -5 to 20 dB). The proposed approach, to the best of our knowledge, is the only method available for phase estimation from severely noisy fringe patterns even when the underlying phase pattern is rapidly varying and has a larger dynamic range. Simulation results and experimental data demonstrate the fact that the proposed approach is a better choice for direct phase estimation. PMID:26974901
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means and satellite, survey, and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
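The key property an information filter exploits is that, for independent data batches, information matrices (inverse covariances) and information vectors simply add. A scalar sketch of that fusion step follows; it is illustrative only, not the study's multi-coefficient spherical harmonic filter.

```python
def information_fuse(estimates):
    """Combine independent estimates (x_k, P_k) in information form.

    Information matrix Y = P^-1 and information vector y = P^-1 * x add
    across independent batches; the scalar case is shown for clarity.
    """
    y_total = 0.0  # information vector (scalar here)
    Y_total = 0.0  # information matrix (scalar here)
    for x, p in estimates:
        Y_total += 1.0 / p
        y_total += x / p
    return y_total / Y_total, 1.0 / Y_total  # fused mean and variance
```

Fusing two equally uncertain estimates, (1.0, variance 2.0) and (3.0, variance 2.0), gives mean 2.0 with variance 1.0, i.e. the combined estimate is strictly more certain than either input.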
Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering
Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider
2010-01-01
The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
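A constant-position (random-walk) Kalman filter over noisy 1-D particle positions illustrates the idea. The noise variances q and r below are assumed placeholders, to be calibrated experimentally as the paper describes; this is a sketch, not the authors' implementation.

```python
def kalman_smooth_positions(measurements, q=0.01, r=1.0):
    """Constant-position (random-walk) Kalman filter for noisy 1-D positions.

    q: process noise variance (intrinsic particle motion per frame)
    r: measurement noise variance (positioning error of the image analysis)
    """
    x, p = measurements[0], r       # initialize from the first frame
    filtered = []
    for z in measurements:
        p += q                      # predict: random-walk motion model
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the new frame's position
        p *= 1.0 - k
        filtered.append(x)
    return filtered
```

Because r dominates q here, the gain stays small and the filter suppresses frame-to-frame positioning jitter while following slow drifts.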
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
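The passband droop and stopband nulls discussed above follow directly from the comb (CIC) magnitude response |H(f)| = |sin(πfM)/(M sin(πf))|, where f is frequency normalized to the input rate and M the number of taps. A small helper (generic, not the paper's sharpened design) makes both visible:

```python
import math

def cic_comb_magnitude(f, m):
    """|H(f)| = |sin(pi*f*M) / (M * sin(pi*f))| for an M-tap comb section,
    with f the frequency normalized to the input sampling rate."""
    if f == 0.0:
        return 1.0  # unity DC gain by l'Hopital's rule
    return abs(math.sin(math.pi * f * m) / (m * math.sin(math.pi * f)))
```

Values below 1 for small f are the droop that the compensator and sharpening polynomials correct; nulls fall at multiples of f = 1/M, which is what makes combs useful for decimation.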
Generic Hardware Architectures for Sampling and Resampling in Particle Filters
NASA Astrophysics Data System (ADS)
Athalye, Akshay; Bolić, Miodrag; Hong, Sangjin; Djurić, Petar M.
2005-12-01
Particle filtering is a statistical signal processing methodology that has recently gained popularity in solving several problems in signal processing and communications. Particle filters (PFs) have been shown to outperform traditional filters in important practical scenarios. However their computational complexity and lack of dedicated hardware for real-time processing have adversely affected their use in real-time applications. In this paper, we present generic architectures for the implementation of the most commonly used PF, namely, the sampling importance resampling filter (SIRF). These provide a generic framework for the hardware realization of the SIRF applied to any model. The proposed architectures significantly reduce the memory requirement of the filter in hardware as compared to a straightforward implementation based on the traditional algorithm. We propose two architectures each based on a different resampling mechanism. Further, modifications of these architectures for acceleration of resampling process are presented. We evaluate these schemes based on resource usage and latency. The platform used for the evaluations is the Xilinx Virtex II pro FPGA. The architectures presented here have led to the development of the first hardware (FPGA) prototype for the particle filter applied to the bearings-only tracking problem.
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from the particle filter toward blobs whose motion is similar to the target's. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit the segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the target-model update strategy improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters
Higby, D.P.
1984-11-01
Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.
Multiple states and joint objects particle filter for eye tracking
NASA Astrophysics Data System (ADS)
Xiong, Jin; Jiang, Zhaohui; Liu, Junwei; Feng, Huanqing
2007-11-01
Recent works have proven that the particle filter is a powerful tracking technique for non-linear and non-Gaussian estimation problems. This paper presents an extension of the color-based particle filter framework that is applicable to complex eye tracking because of two main innovations. First, the use of an extra discrete-valued variable and its associated transition probability matrix (TPM) makes it feasible to track the multiple states of the eye during blinking. Second, a joint-object formulation of the state vector eliminates the mutual distraction between the two eyes. The experimental results illustrate that the proposed algorithm is efficient for eye tracking.
Westinghouse hot gas particle filter system
Lippert, T.E.; Bruck, G.J.; Newby, R.A.; Bachovchin, D.M.; Debski, V.L.; Morehead, H.T.
1997-12-31
Integrated Gasification Combined Cycles (IGCC) and Pressurized Circulating Fluidized Bed Cycles (PCFB) are being developed and demonstrated for commercial power generation applications. Hot gas particulate filters (HGPF) are key components for the successful implementation of IGCC and PCFB in power generation gas turbine cycles. The objective is to develop and qualify through analysis and testing a practical HGPF system that meets the performance and operational requirements of PCFB and IGCC systems. This paper reports on the status of Westinghouse's HGPF commercialization programs including: A quick summary of past gasification based HGPF test programs; A summary of the integrated HGPF operation at the American Electric Power, Tidd Pressurized Fluidized Bed Combustion (PFBC) Demonstration Project with approximately 6000 hours of HGPF testing completed; A summary of approximately 3200 hours of HGPF testing at the Foster Wheeler (FW) 10 MWe facility located in Karhula, Finland; A summary of over 700 hours of HGPF operation at the FW 2 MWe topping PCFB facility located in Livingston, New Jersey; A summary of the design of the HGPFs for the DOE/Southern Company Services, Power System Development Facility (PSDF) located in Wilsonville, Alabama; A summary of the design of the commercial-scale HGPF system for the Sierra Pacific, Pinon Pine IGCC Project; A review of completed testing and a summary of planned testing of Westinghouse HGPFs in Biomass IGCC applications; and A brief summary of the HGPF systems for the City of Lakeland, McIntosh Unit 4 PCFB Demonstration Project.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of two performance measures: correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, in which the true-positive rate increased for the same average number of false positives per image.
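The adaptive-step search loop described above can be illustrated generically. The sketch below maximizes a hypothetical composite score over three parameters using finite-difference gradients, halving the step whenever a move fails to improve; the score function and all constants are placeholders, not JPL's actual performance metric:

```python
def adaptive_gradient_ascent(score, params, step=0.1, eps=1e-3, iters=200):
    """Finite-difference gradient ascent with a simple adaptive step:
    the step is halved whenever a trial move does not improve the score."""
    best = score(params)
    for _ in range(iters):
        # Forward-difference gradient estimate at the current point.
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            grad.append((score(bumped) - best) / eps)
        trial = [p + step * g for p, g in zip(params, grad)]
        trial_score = score(trial)
        if trial_score > best:
            params, best = trial, trial_score
        else:
            step *= 0.5          # adapt: shrink the step on failure
            if step < 1e-6:
                break
    return params, best

# Hypothetical composite score peaked at (alpha, beta, gamma) = (1, 2, 3).
score = lambda p: -((p[0] - 1) ** 2 + (p[1] - 2) ** 2 + (p[2] - 3) ** 2)
opt, val = adaptive_gradient_ascent(score, [0.0, 0.0, 0.0])
```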
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also included for performance evaluation. PMID:24453865
Nonlinear Statistical Signal Processing: A Particle Filtering Approach
Candy, J
2007-09-19
An introduction to particle filtering is presented, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g., Kalman filters) are no longer a restriction on performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of the particle filter designs with classical nonlinear filter implementations.
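The sequential importance sampling/resampling idea summarized above reduces, in its simplest bootstrap form, to a propagate-weight-resample loop. A minimal one-dimensional sketch (the random-walk dynamics and Gaussian measurement noise are assumptions chosen for brevity):

```python
import math
import random

def bootstrap_pf(observations, n=500, q=0.5, r=0.5):
    """Minimal bootstrap particle filter for the toy model
    x_k = x_{k-1} + N(0, q^2),  y_k = x_k + N(0, r^2).
    Returns the posterior-mean state estimate at each step."""
    random.seed(1)
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # 1. Propagate through the dynamics (the bootstrap proposal).
        particles = [x + random.gauss(0.0, q) for x in particles]
        # 2. Weight by the Gaussian measurement likelihood and normalize.
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3. Posterior-mean estimate, then resample to fight degeneracy.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

# Track a constant true state of 2.0 observed in noise.
random.seed(7)
obs = [2.0 + random.gauss(0.0, 0.5) for _ in range(30)]
est = bootstrap_pf(obs)
```

No Gaussian or linearity assumption enters the filter itself: only the ability to simulate the dynamics and evaluate the measurement likelihood is required, which is exactly the generality the Bayesian construct above provides.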
A local particle filter for high dimensional geophysical systems
NASA Astrophysics Data System (ADS)
Penny, S. G.; Miyoshi, T.
2015-12-01
A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard Sampling Importance Resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each gridpoint. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the Local Ensemble Transform Kalman Filter (LETKF) using the 40-variable Lorenz-96 model. The results show that: (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.
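The degeneracy-prevention step described, resampling followed by Gaussian noise with magnitude equal to the local analysis spread, can be sketched in one dimension. Multinomial resampling is used here for brevity in place of Kitagawa's deterministic scheme, and the scalar state is an illustration, not the LPF's gridpoint-local formulation:

```python
import random

def resample_with_jitter(particles, weights, seed=5):
    """Resample, then add Gaussian jitter whose standard deviation equals
    the weighted (analysis) spread, so the ensemble does not collapse onto
    a few repeated copies after resampling."""
    rng = random.Random(seed)
    mean = sum(w * x for w, x in zip(weights, particles))
    spread = sum(w * (x - mean) ** 2 for w, x in zip(weights, particles)) ** 0.5
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    return [x + rng.gauss(0.0, spread) for x in resampled]

# One dominant particle: plain resampling would duplicate it heavily,
# but jitter restores diversity at the scale of the analysis spread.
particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.7, 0.1, 0.1, 0.1]
jittered = resample_with_jitter(particles, weights)
```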
Bearings-Only Tracking of Manoeuvring Targets Using Particle Filters
NASA Astrophysics Data System (ADS)
Arulampalam, M. Sanjeev; Ristic, B.; Gordon, N.; Mansell, T.
2004-12-01
We investigate the problem of bearings-only tracking of manoeuvring targets using particle filters (PFs). Three different PFs are proposed for this problem, which is formulated as a multiple model tracking problem in a jump Markov system (JMS) framework. The proposed filters are (i) multiple model PF (MMPF), (ii) auxiliary MMPF (AUX-MMPF), and (iii) jump Markov system PF (JMS-PF). The performance of these filters is compared with that of standard interacting multiple model (IMM)-based trackers such as IMM-EKF and IMM-UKF for three separate cases: (i) single-sensor case, (ii) multisensor case, and (iii) tracking with hard constraints. A conservative CRLB applicable for this problem is also derived and compared with the RMS error performance of the filters. The results confirm the superiority of the PFs for this difficult nonlinear tracking problem.
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
Localization using omnivision-based manifold particle filters
NASA Astrophysics Data System (ADS)
Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond
2015-01-01
Developing precise and low-cost spatial localization algorithms is an essential component for autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensions and excessive bandwidth. This paper presents a novel approach to indoor localization using a low cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
Fast, parallel implementation of particle filtering on the GPU architecture
NASA Astrophysics Data System (ADS)
Gelencsér-Horváth, Anna; Tornai, Gábor János; Horváth, András; Cserey, György
2013-12-01
In this paper, we introduce a modified cellular particle filter (CPF) which we mapped onto a graphics processing unit (GPU) architecture. We developed this filter adaptation using a state-of-the-art CPF technique. Mapping this filter realization onto a highly parallel architecture entailed a shift in the logical representation of the particles. In this process, the original two-dimensional organization is reordered as a one-dimensional ring topology. We performed a proof-of-concept measurement on two models with an NVIDIA Fermi architecture GPU. This design achieved a 411-μs kernel time per state and a 77-ms global running time for all states for 16,384 particles with a 256 neighbourhood size on a sequence of 24 states for a bearings-only tracking model. For a commonly used benchmark model at the same configuration, we achieved a 266-μs kernel time per state and a 124-ms global running time for all 100 states. Kernel time includes random number generation on the GPU with curand. These results attest to the effective and fast use of the particle filter in high-dimensional, real-time applications.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
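For context, the plain global-best PSO that such variants build on can be sketched as follows. This baseline omits the paper's neighbor information, abandonment mechanism, and chaotic search, and the coefficients are conventional defaults rather than the authors' settings:

```python
import random

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain global-best PSO minimizing f over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

With inertia w < 1 and standard acceleration coefficients the swarm contracts onto the best point found, which is also why the premature-convergence problem the paper addresses arises in the first place.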
Optimal filtering methods to structural damage estimation under ground excitation.
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the stiffness damage. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g., near higher-frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum-phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parameterization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engine space launch vehicle.
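Parameterizing a recursive filter directly in the z-plane amounts to choosing pole and zero locations with every pole inside the unit circle, then reading off gain and phase from the frequency response. A minimal sketch with hypothetical placements (a second-order low-pass with both zeros at z = -1), not the paper's constrained optimization:

```python
import cmath
import math

def freq_response(zeros, poles, gain, omega):
    """Evaluate H(e^{jw}) for a recursive filter given its z-plane zeros,
    poles, and gain. Keeping every |pole| < 1 guarantees stability."""
    z = cmath.exp(1j * omega)
    num = complex(gain)
    for q in zeros:
        num *= (z - q)
    den = complex(1.0)
    for p in poles:
        den *= (z - p)
    return num / den

# Hypothetical design: zeros at z = -1 null the Nyquist frequency,
# a complex pole pair at radius 0.5 shapes the passband.
zeros = [-1.0, -1.0]
poles = [0.5 * cmath.exp(0.3j), 0.5 * cmath.exp(-0.3j)]

# Normalize to unit DC gain, then check both band edges.
g = 1.0 / abs(freq_response(zeros, poles, 1.0, 0.0))
lowpass_dc = abs(freq_response(zeros, poles, g, 0.0))     # ~1 at DC
highpass_att = abs(freq_response(zeros, poles, g, math.pi))  # ~0 at Nyquist
```

Searching over pole radii/angles under |pole| < 1 (and zeros inside the unit circle for minimum phase) is one simple way to restrict a search space to stable, well-conditioned candidates, in the spirit of the restriction described above.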
Single-channel noise reduction using optimal rectangular filtering matrices.
Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi
2013-02-01
This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: One correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both the forward and backward prediction as well as the temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation. PMID:23363124
Boosting target tracking using particle filter with flow control
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Chan, Moses W.
2013-05-01
Target detection and tracking with passive infrared (IR) sensors can be challenging due to significant degradation and corruption of target signature by atmospheric transmission and clutter effects. This paper summarizes our efforts in phenomenology modeling of boosting targets with IR sensors, and developing algorithms for tracking targets in the presence of background clutter. On the phenomenology modeling side, the clutter images are generated using a high fidelity end-to-end simulation testbed. It models atmospheric transmission, structured clutter and solar reflections to create realistic background images. The dynamics and intensity of a boosting target are modeled and injected onto the background scene. Pixel level images are then generated with respect to the sensor characteristics. On the tracking analysis side, a particle filter for tracking targets in a sequence of clutter images is developed. The particle filter is augmented with a mechanism to control particle flow. Specifically, velocity feedback is used to constrain and control the particles. The performance of the developed "adaptive" particle filter is verified with tracking of a boosting target in the presence of clutter and occlusion.
Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang
2015-01-01
We propose an improved algorithm, a multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization algorithm (PSO) in order to balance exploration and exploitation, as well as to enhance the capacity for global search in solving nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm on six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200
Distributed Particle Filter for Target Tracking: With Reduced Sensor Communications.
Ghirmai, Tadesse
2016-01-01
For efficient and accurate estimation of the location of objects, a network of sensors can be used to detect and track targets in a distributed manner. In nonlinear and/or non-Gaussian dynamic models, distributed particle filtering methods are commonly applied to develop target tracking algorithms. An important consideration in developing a distributed particle filtering algorithm in wireless sensor networks is reducing the size of data exchanged among the sensors because of power and bandwidth constraints. In this paper, we propose a distributed particle filtering algorithm with the objective of reducing the overhead data that is communicated among the sensors. In our algorithm, the sensors exchange information to collaboratively compute the global likelihood function that encompasses the contribution of the measurements towards building the global posterior density of the unknown location parameters. Each sensor, using its own measurement, computes its local likelihood function and approximates it using a Gaussian function. The sensors then propagate only the mean and the covariance of their approximated likelihood functions to other sensors, reducing the communication overhead. The global likelihood function is computed collaboratively from the parameters of the local likelihood functions using an average consensus filter or a forward-backward propagation information exchange strategy. PMID:27618057
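The consensus stage can be illustrated with scalar values on a ring network. In the algorithm described above, the quantities averaged would be the means and covariances of the Gaussian-approximated local likelihoods; the sketch below uses placeholder scalars and a simple diffusion-style update:

```python
def average_consensus(values, neighbors, steps=50, eps=0.3):
    """Distributed averaging: each node repeatedly moves toward its
    neighbors' values using only local communication. For an undirected
    graph, all nodes converge to the network-wide average provided
    eps < 1 / max_degree."""
    vals = list(values)
    for _ in range(steps):
        new = []
        for i, v in enumerate(vals):
            new.append(v + eps * sum(vals[j] - v for j in neighbors[i]))
        vals = new
    return vals

# Ring of 5 sensors, each holding a local (placeholder) likelihood mean.
local_means = [1.0, 2.0, 3.0, 4.0, 5.0]
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
fused = average_consensus(local_means, ring)
```

Because the update is symmetric, the network sum is conserved at every step, so every node ends up holding the same global average of the local parameters; this is the communication-cheap substitute for broadcasting raw particles.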
Expected likelihood for tracking in clutter with particle filters
NASA Astrophysics Data System (ADS)
Marrs, Alan; Maskell, Simon; Bar-Shalom, Yaakov
2002-08-01
The standard approach to tracking a single target in clutter, using the Kalman filter or extended Kalman filter, is to gate the measurements using the predicted measurement covariance and then to update the predicted state using probabilistic data association. When tracking with a particle filter, an analog to the predicted measurement covariance is not directly available and could only be constructed as an approximation to the current particle cloud. A common alternative is to use a form of soft gating, based upon a Student's-t likelihood, that is motivated by the concept of score functions in classical statistical hypothesis testing. In this paper, we combine the score function and probabilistic data association approaches to develop a new method for tracking in clutter using a particle filter. This is done by deriving an expected likelihood from known measurement and clutter statistics. The performance of this new approach is assessed on a series of bearings-only tracking scenarios with uncertain sensor location and non-Gaussian clutter.
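The effect of heavy-tailed soft gating can be seen by comparing a Student's-t likelihood with a Gaussian on an outlying innovation. This is a generic illustration of the score-function motivation, not the paper's expected-likelihood derivation; the scale and degrees-of-freedom values are arbitrary:

```python
import math

def student_t_weight(innovation, scale, dof=3.0):
    """Student's-t likelihood of a measurement innovation. Its heavy
    tails down-weight clutter-like outliers softly instead of driving
    the particle weight to (numerically) zero."""
    z = innovation / scale
    norm = (math.gamma((dof + 1) / 2)
            / (math.gamma(dof / 2) * math.sqrt(dof * math.pi) * scale))
    return norm * (1 + z * z / dof) ** (-(dof + 1) / 2)

def gaussian_weight(innovation, scale):
    """Gaussian likelihood of the same innovation, for comparison."""
    return math.exp(-0.5 * (innovation / scale) ** 2) / (scale * math.sqrt(2 * math.pi))

# A 6-sigma "clutter" innovation: the t weight stays usable,
# while the Gaussian weight collapses toward zero.
w_t = student_t_weight(6.0, 1.0)
w_g = gaussian_weight(6.0, 1.0)
```

Keeping outlier weights finite is what lets the particle cloud survive clutter-corrupted updates instead of concentrating all mass on a handful of particles.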
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
Random set particle filter for bearings-only multitarget tracking
NASA Astrophysics Data System (ADS)
Vihola, Matti
2005-05-01
The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the state of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model where each sensor report is assumed to contain at most one measurement. The implemented filter was tested in synthetic bearings-only tracking scenarios containing up to two targets in the presence of false alarms and missed measurements. The estimated target state consisted of 2D position and velocity components. The filter was capable of tracking the targets fairly well despite the missing measurements and the relatively high false alarm rates. In addition, the filter showed robustness against incorrect false-alarm-rate parameter values. The results obtained during the limited tests of the filter show that the random set framework has potential for challenging tracking situations. On the other hand, the computational burden of the described implementation is quite high and increases approximately linearly with the expected number of targets.
Distributed soft-data-constrained multi-model particle filter.
Seifzadeh, Sepideh; Khaleghi, Bahador; Karray, Fakhri
2015-03-01
A distributed nonlinear estimation method based on soft-data-constrained multimodel particle filtering, applicable to a number of distributed state estimation problems, is proposed. This method needs only local data exchange among neighboring sensor nodes and thus provides enhanced reliability, scalability, and ease of deployment. To make the multimodel particle filtering work in a distributed manner, a Gaussian approximation of the particle cloud obtained at each sensor node and a consensus propagation-based distributed data aggregation scheme are used to dynamically reweight the particles' weights. The proposed method can recover from failure situations and is robust to noise, since it keeps the same population of particles and uses the aggregated global Gaussian to infer constraints. The constraints are enforced by adjusting particles' weights and assigning a higher mass to those closer to the global estimate represented by the nodes in the entire sensor network after each communication step. Each sensor node experiences gradual change; i.e., if noise occurs in the system, the node, its neighbors, and consequently the overall network are less affected than with other approaches, and thus recover faster. The efficiency of the proposed method is verified through extensive simulations of a target tracking system which can process both soft and hard data in sensor networks. PMID:24956539
Optimization of the development process for air sampling filter standards
NASA Astrophysics Data System (ADS)
Mena, RaJah Marie
Air monitoring is an important analysis technique in health physics. However, creating standards that can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood, including how the location of activity within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and to optimize these parameters such that calibrations made with the standards most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to provide the best approximation of uniform deposition of material across the filter. Sample sets were created varying the type of radionuclide, the amount of activity (high activity at 6.4-306 Bq/filter and low activity at 0.05-6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors and ensure this is a viable procedure, especially for alpha contaminants.
Na-Faraday rotation filtering: The optimal point
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-01-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251
Optimal Correlation Filters for Images with Signal-Dependent Noise
NASA Technical Reports Server (NTRS)
Downie, John D.; Walkup, John F.
1994-01-01
We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
Optimization of narrow optical spectral filters for nonparallel monochromatic radiation.
Linder, S L
1967-07-01
This paper delineates a method of determining the design criteria for narrow optical passband filters used in the reception of nonparallel modulated monochromatic radiation. The analysis results in straightforward mathematical expressions for calculating the filter width and design center wavelength which maximize the signal-to-noise ratio. Two cases are considered: (a) the filter is designed to have a maximum transmission (for normal incidence) at the incident wavelength, but with the spectral width optimized, and (b) both the design wavelength and the spectral width are optimized. It is shown that the voltage signal-to-noise ratio for case (b) is 2^(1/2) times that of case (a). Numerical examples are calculated. PMID:20062163
Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.
Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu
2015-10-01
The particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of a particle's historical promising pbests in PSO are lost except its current pbest. To solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, the particle's current pbest, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms. PMID:26390177
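The pbest/gbest attraction that HMPSO builds on is the canonical PSO update; a minimal sketch follows (standard constriction-type coefficients, not the HMPSO memory mechanism itself):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """Canonical PSO update: attract each particle toward its own best
    position (pbest) and the swarm-wide best (gbest)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# minimize the sphere function f(x) = x1^2 + x2^2
rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, (30, 2))
v = np.zeros_like(x)
pbest = x.copy()
pval = (x ** 2).sum(axis=1)
for _ in range(200):
    gbest = pbest[pval.argmin()]
    x, v = pso_step(x, v, pbest, gbest, rng=rng)
    f = (x ** 2).sum(axis=1)
    improved = f < pval
    pbest[improved] = x[improved]
    pval[improved] = f[improved]
```

HMPSO would additionally draw a third candidate position from a distribution estimated over historical pbests and keep the best of the three; that part is omitted here.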
Opdic (optimized Peak, Distortion and Clutter) Detection Filter.
NASA Astrophysics Data System (ADS)
House, Gregory Philip
1995-01-01
Detection is considered. This involves determining regions of interest (ROIs) in a scene: the locations of multiple object classes in a scene in clutter when object distortions and contrast differences are present. High probability of detection P_{D} is essential and low P_{FA } is desirable since subsequent stages in the full system will only decrease P_{FA } and cannot increase P_{D }. Low resolution blob objects and objects with more internal detail are considered with both 3-D aspect view and depression angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation plane energy due to distortions and clutter while satisfying correlation peak constraint values for various object-aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated noise clutter model; a white Gaussian noise model of distortion was used; and a new techniques to increase the number of training set images (N _{T}) included in the filter were developed. Excellent results were obtained. However, the correlation plane distortion and clutter energy functions were found to become worse as N_{T } was increased and no rigorous method exists to select the best N_{T} (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retained the new object, clutter and distortion models noted. It minimizes the variance of the correlation peak values for all training set images (not just the N_{T} images). As N _{T} increases, the peak variance and the objective functions (correlation plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized). Tests show
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to that obtained by GA and PSO. The simulation and statistical results affirm that the proposed CSA approach outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, and faster convergence). The absolute magnitude and phase errors obtained for the designed 5th order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error 76.04% and 71.25%, respectively. PMID:26391486
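A WLS fitness of the kind mentioned can be sketched as follows (an illustrative formulation, not necessarily the paper's exact weighting or error definition; `wls_fitness` is a hypothetical helper). It scores a set of IIR coefficients against the ideal fractional-delay response exp(-jwD):

```python
import numpy as np

def wls_fitness(b, a, delay, n_freq=256, weight=None):
    """Weighted least-squares error between the IIR response
    H(e^{jw}) = B(e^{-jw}) / A(e^{-jw}) and the ideal fractional-delay
    response exp(-j * w * delay), averaged over [0, pi)."""
    w = np.linspace(0.0, np.pi, n_freq, endpoint=False)
    zinv = np.exp(-1j * w)                          # e^{-jw}
    H = np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
    err = np.abs(H - np.exp(-1j * w * delay)) ** 2
    wt = np.ones(n_freq) if weight is None else weight
    return float(np.sum(wt * err) / n_freq)
```

A CSA (or GA/PSO) optimizer would then minimize this function over the numerator and denominator coefficient vectors; a pure one-sample delay, b = [0, 1], a = [1], matches delay = 1 exactly and yields zero error.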
Measurement of particle sulfate from micro-aethalometer filters
NASA Astrophysics Data System (ADS)
Wang, Qingqing; Yang, Fumo; Wei, Lianfang; Zheng, Guangjie; Fan, Zhongjie; Rajagopalan, Sanjay; Brook, Robert D.; Duan, Fengkui; He, Kebin; Sun, Yele; Brook, Jeffrey R.
2014-10-01
The micro-aethalometer (AE51) was designed for high time resolution black carbon (BC) measurements and collects particles on a filter inside the instrument. Here we examine the potential for saving these filters for subsequent sulfate (SO42-) measurement. For this purpose, a series of lab and field blanks were analyzed to characterize blank levels and variability, and then collocated 24-h aerosol sampling was conducted in Beijing with the AE51 and a dual-channel filterpack sampler that collects fine particles (PM2.5). AE51 filters and the filters from the filterpacks sampled for 24 h were extracted with ultrapure water and then analyzed by ion chromatography (IC) to determine the integrated SO42- concentration. Blank corrections were essential, and the detection limit for 24-h AE51 sampling of SO42- was estimated to be 1.4 μg/m3. The SO42- measured from the AE51, blank-corrected using batch-average field blank SO42- values, was found to be in reasonable agreement with the filterpack results (R2 > 0.87, slope = 1.02), indicating that it is possible to determine both BC and SO42- concentrations using the AE51 in Beijing. This result suggests that future comparison of the relative health impacts of BC and SO42- could be possible when the AE51 is used for personal exposure measurement.
Marginalized Particle Filter for Blind Signal Detection with Analog Imperfections
NASA Astrophysics Data System (ADS)
Yoshida, Yuki; Hayashi, Kazunori; Sakai, Hideaki; Bocquet, Wladimir
Recently, the marginalized particle filter (MPF) has been applied to blind symbol detection problems over selective fading channels. The MPF can ease the computational burden of the standard particle filter (PF) while offering better estimates. In this paper, we investigate the application of the blind MPF detector to more realistic situations in which the systems suffer from analog imperfections, i.e., non-linear signal distortion due to inaccurate analog circuits in wireless devices. By reformulating the system model using the widely linear representation and employing the auxiliary variable resampling (AVR) technique for estimation of the imperfections, the blind MPF detector is successfully modified to cope with the analog imperfections. The effectiveness of the proposed MPF detector is demonstrated via computer simulations.
Degeneracy, frequency response and filtering in IMRT optimization
NASA Astrophysics Data System (ADS)
Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus
2004-07-01
This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.
Linear multistep methods, particle filtering and sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2013-08-01
Numerical integration is the main bottleneck in particle filter methodologies for dynamic inverse problems, in which model parameters, initial values, and non-observable components of an ordinary differential equation (ODE) system are estimated from partial, noisy observations: proposals may result in stiff systems that first slow down or paralyze the time integration process, then end up being discarded. The immediate advantage of formulating the problem in a sequential manner is that the integration is carried out on shorter intervals, thus reducing the risk of long integration processes followed by rejections. We propose to solve the ODE systems within a particle filter framework with higher order numerical integrators which can handle stiffness, and to base the choice of the variance of the innovation on estimates of the discretization errors. The application of linear multistep methods to particle filters gives a handle on the stability and accuracy of the propagation, and linking the innovation variance to the accuracy estimate helps keep the variance of the estimate as low as possible. The effectiveness of the methodology is demonstrated with a simple ODE system similar to those arising in biochemical applications.
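The interplay described, a linear multistep integrator propagating each particle followed by a likelihood-based correction, can be sketched on a scalar toy problem (a bootstrap-filter illustration under stated assumptions, not the authors' method; the link between innovation variance and discretization error estimates is omitted):

```python
import numpy as np

def ab2_propagate(f, y0, h, n_steps):
    """Two-step Adams-Bashforth linear multistep integrator,
    bootstrapped with one forward-Euler step."""
    y = float(y0)
    f_prev = f(y)
    y = y + h * f_prev                         # Euler start-up step
    for _ in range(n_steps - 1):
        f_curr = f(y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return y

def resample_by_likelihood(particles, obs, obs_std, rng):
    """Correction step: weight particles by the Gaussian observation
    likelihood and resample."""
    w = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
    w /= w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# toy system dy/dt = -y, observed once at t = 1
rng = np.random.default_rng(2)
particles = rng.normal(1.0, 0.5, size=1000)
particles = np.array([ab2_propagate(lambda y: -y, p, 0.01, 100)
                      for p in particles])
particles = resample_by_likelihood(particles, obs=np.exp(-1.0),
                                   obs_std=0.05, rng=rng)
```

In the paper's setting the integrator would be a stiffness-capable multistep method and the innovation variance would be tied to the local error estimate; here both are replaced by fixed toy values.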
Optimal color image restoration: Wiener filter and quaternion Fourier transform
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.
2015-03-01
In this paper, we consider the model of quaternion signal degradation in which the signal is convolved and additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative, and the quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
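For ordinary (real- or complex-valued) signals, the classical solution referred to here is frequency-domain Wiener deconvolution; a minimal sketch (flat signal and noise spectra assumed, so the noise-to-signal ratio N/S reduces to a scalar):

```python
import numpy as np

def wiener_deconvolve(y, h, noise_power, signal_power):
    """Frequency-domain Wiener filter for y = h * x + n (circular
    convolution): W = conj(H) / (|H|^2 + N/S) minimizes the MSE."""
    n = len(y)
    H = np.fft.fft(h, n)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft(W * np.fft.fft(y)))

# blur a sinusoid with a 5-tap moving average, add noise, restore
rng = np.random.default_rng(3)
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 256, endpoint=False))
h = np.zeros(256)
h[:5] = 0.2                                   # moving-average blur kernel
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
y = y + rng.normal(0.0, 0.01, 256)
x_hat = wiener_deconvolve(y, h, noise_power=1e-4, signal_power=0.5)
```

The quaternion case studied in the paper cannot reuse this formula directly, precisely because the quaternion Fourier transform does not turn convolution into pointwise multiplication.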
Optimized Beam Sculpting with Generalized Fringe-rate Filters
NASA Astrophysics Data System (ADS)
Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina
2016-03-01
We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer's fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.
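The basic operation, re-weighting visibilities by their temporal variation, can be sketched as a filter in the temporal Fourier (fringe-rate) domain (a toy illustration with an idealized boxcar weight, not the paper's beam-sculpting kernels):

```python
import numpy as np

def fringe_rate_filter(vis, cadence, weight_fn):
    """Re-weight a visibility time series in the fringe-rate domain:
    Fourier transform along time, weight each fringe rate, invert."""
    rates = np.fft.fftfreq(len(vis), d=cadence)    # fringe rates in Hz
    return np.fft.ifft(np.fft.fft(vis) * weight_fn(rates))

# two fringe components; keep only the slowly varying one
t = np.arange(60) * 10.0                           # 10 s cadence, 600 s span
slow = np.exp(2j * np.pi * t * (1 / 600.0))        # fringe rate 1/600 Hz
fast = np.exp(2j * np.pi * t * (10 / 600.0))       # fringe rate 10/600 Hz
out = fringe_rate_filter(slow + fast, 10.0, lambda f: np.abs(f) < 5 / 600.0)
```

Because source position on the sky maps to fringe rate, such a weight function effectively reshapes the antenna's primary beam; the paper derives weights from minimum-variance arguments rather than a hard cutoff.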
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results of assimilating ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
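The two update rules contrasted in this abstract can be sketched side by side for a scalar state (a toy linear-Gaussian illustration, not the SACSMA experiments):

```python
import numpy as np

def enkf_update(ens, obs, obs_std, rng):
    """EnKF: shift each ensemble member using the Kalman gain built
    from the ensemble variance (scalar state, direct observation)."""
    P = ens.var(ddof=1)
    K = P / (P + obs_std ** 2)
    perturbed = obs + rng.normal(0.0, obs_std, len(ens))
    return ens + K * (perturbed - ens)

def pf_update(ens, obs, obs_std, rng):
    """PF: re-weight members by the observation likelihood, resample."""
    w = np.exp(-0.5 * ((ens - obs) / obs_std) ** 2)
    w /= w.sum()
    return ens[rng.choice(len(ens), size=len(ens), p=w)]

rng = np.random.default_rng(4)
prior = rng.normal(2.0, 1.0, 5000)
post_enkf = enkf_update(prior, obs=1.0, obs_std=0.5, rng=rng)
post_pf = pf_update(prior, obs=1.0, obs_std=0.5, rng=rng)
```

For this Gaussian prior both posteriors agree; the PF's advantage appears when the forecast distribution is non-Gaussian, at the cost of weight degeneracy for small ensembles.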
Acoustic Radiation Optimization Using the Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
Jeon, Jin-Young; Okuma, Masaaki
The present paper describes a fundamental study on structural bending design to reduce noise using a new evolutionary population-based heuristic algorithm called the particle swarm optimization algorithm (PSOA). The particle swarm optimization algorithm is a parallel evolutionary computation technique proposed by Kennedy and Eberhart in 1995. This algorithm is based on the social behavior models for bird flocking, fish schooling and other models investigated by zoologists. Optimal structural design problems to reduce noise are highly nonlinear, making most conventional methods difficult to apply. The present paper investigates the applicability of PSOA to such problems. Optimal bending design of a vibrating plate using PSOA is performed in order to minimize noise radiation. PSOA can be effectively applied to such nonlinear acoustic radiation optimization.
Optimal matched filter design for ultrasonic NDE of coarse grain materials
NASA Astrophysics Data System (ADS)
Li, Minghui; Hayward, Gordon
2016-02-01
Coarse grain materials are widely used in a variety of key industrial sectors such as energy, oil and gas, and aerospace due to their attractive properties. However, when these materials are inspected using ultrasound, the flaw echoes are usually contaminated by high-level, correlated grain noise originating from the material microstructures, which is time-invariant and exhibits spectral characteristics similar to those of flaw signals. As a result, the reliable inspection of such materials is highly challenging. In this paper, we present a method for reliable ultrasonic non-destructive evaluation (NDE) of coarse grain materials using matched filters, where the filter is designed to approximate and match the unknown defect echoes, and a particle swarm optimization (PSO) paradigm is employed to search for the optimal parameters of the filter response with the objective of maximising the output signal-to-noise ratio (SNR). Experiments with a 128-element 5 MHz transducer array on mild steel and INCONEL Alloy 617 samples are conducted, and the results confirm that the SNR of the images is improved by about 10-20 dB when the optimized matched filter is applied to all the A-scan waveforms prior to image formation. Furthermore, the matched filter can be implemented in real time with low extra computational cost.
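A matched filter of the general kind described can be sketched as normalized cross-correlation (a toy illustration with white noise and a known template; the paper instead searches the template parameters with PSO against correlated grain noise):

```python
import numpy as np

def matched_filter(waveform, template):
    """Correlate a waveform with a zero-mean, unit-energy template;
    the output peaks where the waveform best matches the template."""
    t = template - template.mean()
    t = t / np.sqrt(np.sum(t ** 2))
    return np.correlate(waveform, t, mode='same')

# toy A-scan: a Gaussian-windowed tone burst buried in noise
rng = np.random.default_rng(5)
n = np.arange(64)
burst = np.exp(-0.5 * ((n - 32) / 6.0) ** 2) * np.sin(0.8 * n)
ascan = rng.normal(0.0, 0.3, 512)
ascan[200:264] += burst                       # defect echo at samples 200-263
out = matched_filter(ascan, burst)
```

Applying such a filter to each A-scan before image formation concentrates defect energy into a sharp peak, which is the mechanism behind the reported SNR gains.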
Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement based on hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of these swarm intelligence techniques through the combination of hybrid filters generated by the algorithms for image edge enhancement.
Selectively-informed particle swarm optimization.
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
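The hub/non-hub learning rule can be sketched as follows (an illustrative reading of the abstract, not the published SIPSO code; minimization is assumed, and the fully-informed hub update is taken as an average over neighbor attractions):

```python
import numpy as np

def sipso_velocity(x, v, pbest, fitness, neighbors, hub_degree,
                   w=0.729, c=1.494, rng=None):
    """Selectively-informed update: a hub particle (degree >= hub_degree)
    is fully informed by all its neighbors; a non-hub follows only its
    single best-performing neighbor (lower fitness = better)."""
    rng = rng or np.random.default_rng()
    new_v = np.empty_like(v)
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) >= hub_degree:                # fully informed hub
            pulls = [c * rng.random(x.shape[1]) * (pbest[j] - x[i])
                     for j in nbrs]
            social = np.mean(pulls, axis=0)
        else:                                      # follow best neighbor
            best = min(nbrs, key=lambda j: fitness[j])
            social = c * rng.random(x.shape[1]) * (pbest[best] - x[i])
        new_v[i] = w * v[i] + social
    return new_v

# star network: particle 0 is the hub, particles 1-3 are leaves
neighbors = [[1, 2, 3], [0], [0], [0]]
rng = np.random.default_rng(6)
x = rng.normal(size=(4, 2))
pbest = x.copy()
fitness = (pbest ** 2).sum(axis=1)
vel = sipso_velocity(x, np.zeros((4, 2)), pbest, fitness,
                     neighbors, hub_degree=3, rng=rng)
```

The division of labor reported in the paper follows from this rule: hubs aggregate information and steer the search, while sparsely connected particles track a single good neighbor and preserve diversity.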
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data. PMID:26849867
System-level optimization of baseband filters for communication applications
NASA Astrophysics Data System (ADS)
Delgado-Restituto, Manuel; Fernandez-Bootello, Juan F.; Rodriguez-Vazquez, Angel
2003-04-01
In this paper, we present a design approach for the high-level synthesis of programmable continuous-time Gm-C and active-RC filters with an optimum trade-off among dynamic range, distortion product generation, area consumption and power dissipation, thus meeting the needs of more demanding baseband filter realizations. Further, the proposed technique guarantees that under all programming configurations, transconductors (in Gm-C filters) and resistors (in active-RC filters), as well as capacitors, are related by integer ratios in order to reduce the sensitivity of the monolithic implementation to mismatch. In order to solve the aforementioned trade-off, the filter must be properly scaled at each configuration. This means that filter node impedances must be altered so that the noise contribution of each node to the filter output is as low as possible, while avoiding peak amplitudes at such nodes high enough to drive the active circuits into saturation. Additionally, in order not to degrade the distortion performance of the filter (in particular, if it is implemented using Gm-C techniques), node impedances cannot be scaled independently of each other; restrictions must be imposed according to the principle of nonlinear cancellation. Altogether, the high-level synthesis can be seen as a constrained optimization problem where some of the variables, namely the ratios among similar components, are restricted to discrete values. The proposed approach to accomplishing optimum filter scaling under all programming configurations relies on matrix methods for network representation, which allow an easy estimation of performance features such as dynamic range and power dissipation, as well as other network properties such as sensitivity to parameter variations and non-ideal effects of integrator blocks; and on the use of a simulated annealing algorithm to explore the design space defined by the transfer and group delay specifications. It must be noted that such
Auxiliary particle filter-model predictive control of the vacuum arc remelting process
NASA Astrophysics Data System (ADS)
Lopez, F.; Beaman, J.; Williamson, R.
2016-07-01
Solidification control is required for the suppression of segregation defects in vacuum arc remelting of superalloys. In recent years, process controllers for the VAR process have been proposed based on linear models, which are known to be inaccurate in highly-dynamic conditions, e.g. start-up, hot-top and melt rate perturbations. A novel controller is proposed using auxiliary particle filter-model predictive control based on a nonlinear stochastic model. The auxiliary particle filter approximates the probability of the state, which is fed to a model predictive controller that returns an optimal control signal. For simplicity, the estimation and control problems are solved using Sequential Monte Carlo (SMC) methods. The validity of this approach is verified for a 430 mm (17 in) diameter Alloy 718 electrode melted into a 510 mm (20 in) diameter ingot. Simulation shows a more accurate and smoother performance than the one obtained with an earlier version of the controller.
Pixelated source optimization for optical lithography via particle swarm optimization
NASA Astrophysics Data System (ADS)
Wang, Lei; Li, Sikun; Wang, Xiangzhao; Yan, Guanyong; Yang, Chaoxing
2016-01-01
Source optimization is one of the key techniques for achieving higher resolution without increasing the complexity of mask design. An efficient source optimization approach is proposed on the basis of particle swarm optimization. The pixelated sources are encoded into particles, which are evaluated by using the pattern error as the fitness function. Afterward, the optimization is implemented by updating the velocities and positions of these particles. This approach is demonstrated using three mask patterns, including a periodic array of contact holes, a vertical line/space design, and a complicated pattern. The pattern errors are reduced by 69.6%, 51.5%, and 40.3%, respectively. Compared with the source optimization approach via genetic algorithm, the proposed approach leads to faster convergence while improving the image quality at the same time. Compared with the source optimization approach via gradient descent method, the proposed approach does not need the calculation of gradients, and it has a strong adaptation to various lithographic models, fitness functions, and resist models. The robustness of the proposed approach to initial sources is also verified.
Ridge filter design for a particle therapy line
NASA Astrophysics Data System (ADS)
Kim, Chang Hyeuk; Han, Garam; Lee, Hwa-Ryun; Kim, Hyunyong; Jang, Hong Suk; Kim, Jeong Hwan; Park, Dong Wook; Jang, Sea Duk; Hwang, Won Taek; Kim, Geun-Beom; Yang, Tae-Keun
2014-05-01
The beam irradiation system for particle therapy can use a passive or an active beam irradiation method. In the case of active beam irradiation, a ridge filter is appropriate for generating a spread-out Bragg peak (SOBP) over a large scanning area. For this study, a ridge filter was designed as an energy modulation device for a prototype active scanning system at MC-50 in the Korea Institute of Radiological And Medical Science (KIRAMS). The ridge filter was designed to create a 10-mm SOBP for a 45-MeV proton beam. To reduce the distal penumbra and the initial dose, the weighting factors for the Bragg peaks were determined by applying an in-house iteration code and the Minuit fit package of ROOT. A single ridge bar shape and its corresponding thickness were obtained from 21 weighting factors. The ridge filter was fabricated from polymethyl methacrylate (PMMA) to cover a large scanning area (300 × 300 mm2). The fabricated ridge filter was tested at the prototype active beamline of MC-50. The SOBP and the incident beam distribution were obtained using HD-810 GafChromic film placed at a right angle to the PMMA block. The depth dose profile for the SOBP can be obtained precisely by using a flat-field correction and measuring the 2-dimensional distribution of the incoming beam. After the flat-field correction, the experimental results show that the SOBP region matches the design requirement well, with 0.62% uniformity.
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, each containing the view of a single moving object in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: the ego-motion of the camera, and object motion that is independent of the camera motion. When a scene is captured with such a camera, these two motion types are inseparably blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic
Identification of Backlash Type Hysteretic Systems Based on Particle Filter
NASA Astrophysics Data System (ADS)
Masuda, Tetsuya; Sugie, Toshiharu
This paper considers the system identification problem for hysteretic systems. This problem plays an important role in achieving better control performance, because many actuators exhibit hysteresis. The paper proposes a method to identify linear dynamical systems whose input has a backlash-type hysteresis. The method is based on the particle filter, which is known for its applicability to a wide class of nonlinear systems. Numerical examples are given to demonstrate the effectiveness of the proposed method in detail. Furthermore, experimental validation is performed on a DC servo motor system.
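The backlash input nonlinearity the paper identifies can be simulated with the standard play operator. A minimal sketch, assuming a symmetric deadband; the function and parameter names are illustrative, not the paper's notation:

```python
def backlash(u_seq, deadband, y0=0.0):
    """Backlash (play) operator with half-width `deadband`: the output y
    follows the input u only after u has traversed the gap."""
    y = y0
    out = []
    for u in u_seq:
        if u - y > deadband:        # input pushes the upper side of the gap
            y = u - deadband
        elif y - u > deadband:      # input pushes the lower side of the gap
            y = u + deadband
        # otherwise u moves inside the gap and y stays put
        out.append(y)
    return out
```

Feeding a ramp through this operator shows the characteristic lag: the output only starts moving once the input has crossed the deadband.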
Particle filter based on thermophoretic deposition from natural convection flow
Sasse, A.G.B.M.; Nazaroff, W.W.; Gadgil, A.J.
1994-04-01
We present an analysis of particle migration in a natural convection flow between parallel plates and within the annulus of concentric tubes. The flow channel is vertically oriented with one surface maintained at a higher temperature than the other. Particle migration is dominated by advection in the vertical direction and thermophoresis in the horizontal direction. From scale analysis it is demonstrated that particles are completely removed from air flowing through the channel if its length exceeds L_c = b^4 g/(24 K ν^2), where b is the width of the channel, g is the acceleration of gravity, K is a thermophoretic coefficient of order 0.5, and ν is the kinematic viscosity of air. Precise predictions of particle removal efficiency as a function of system parameters are obtained by numerical solution of the governing equations. Based on the model results, it appears feasible to develop a practical filter for removing smoke particles from a smoldering cigarette in an ashtray by using natural convection in combination with thermophoresis. 22 refs., 8 figs., 1 tab.
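The critical-length expression from the scale analysis is easy to evaluate numerically. A quick worked example; the 5 mm channel width and the air viscosity value are assumptions for illustration, only K = 0.5 comes from the abstract:

```python
def critical_length(b, K=0.5, nu=1.5e-5, g=9.81):
    """Critical channel length L_c = b^4 g / (24 K nu^2): per the scale
    analysis, air flowing through a channel longer than this is fully
    cleared of particles by thermophoresis."""
    return b**4 * g / (24.0 * K * nu**2)

# Channel gap b = 5 mm, kinematic viscosity of air ~1.5e-5 m^2/s:
L_c = critical_length(b=0.005)   # about 2.27 m
```

Note the strong b^4 dependence: halving the gap width shortens the required channel sixteen-fold, which is what makes a compact device plausible.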
Tracking low SNR targets using particle filter with flow control
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2014-06-01
In this work we study the problem of detecting and tracking challenging targets that exhibit low signal-to-noise ratios (SNR). We have developed a particle filter-based track-before-detect (TBD) algorithm for tracking such dim targets. The approach incorporates the most recent state estimates to control the particle flow accounting for target dynamics. The flow control enables accumulation of signal information over time to compensate for target motion. The performance of this approach is evaluated using a sensitivity analysis based on varying target speed and SNR values. This analysis was conducted using high-fidelity sensor and target modeling in realistic scenarios. Our results show that the proposed TBD algorithm is capable of tracking targets in cluttered images with SNR values much less than one.
Loss of Fine Particle Ammonium from Denuded Nylon Filters
Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.
2006-08-01
Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July–August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(--III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.
Nonlinear EEG Decoding Based on a Particle Filter Model
Hong, Jun
2014-01-01
While the world is stepping into the aging society, rehabilitation robots play an increasingly important role in both rehabilitation treatment and nursing of patients with neurological diseases. Benefiting from its abundant movement information, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model has been used as the decoding model of EEG signals in some studies, it cannot reflect the nonlinear components of EEG signals. To overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420
NASA Astrophysics Data System (ADS)
Lin, Xiangdong; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov; Maskell, Simon
2002-08-01
In this paper we consider a nonlinear bearing-only target tracking problem using three different methods and compare their performances. The study is motivated by a ground surveillance problem where a target is tracked from an airborne sensor at an approximately known altitude using depression angle observations. Two nonlinear suboptimal estimators, namely, the extended Kalman filter (EKF) and the pseudomeasurement tracking filter, are applied in a 2-D bearing-only tracking scenario. The EKF is based on the linearization of the nonlinearities in the dynamic and/or the measurement equations. The pseudomeasurement tracking filter manipulates the original nonlinear measurement algebraically to obtain a measurement with a linear-like structure. Finally, the particle filter, which is a Monte Carlo integration based optimal nonlinear filter and has been presented in the literature as a better alternative to linearization via the EKF, is applied to the same problem. The performances of these three techniques in terms of accuracy and computational load are presented in this paper. The results demonstrate the limitations of these algorithms on this deceptively simple tracking problem.
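The abstract does not spell out the pseudomeasurement algebra, but the standard manipulation for a 2-D bearing is worth sketching: a bearing θ from sensor (xs, ys) to target (x, y) satisfies sin(θ)(x − xs) − cos(θ)(y − ys) = 0, which is linear in the target position. A minimal sketch under that common convention (an assumption, not necessarily the exact form used in the paper):

```python
import math

def pseudomeasurement(theta, sensor_xy):
    """Convert a bearing theta into a linear pseudomeasurement z = H @ [x, y].
    Since the target lies on the bearing ray:
        sin(theta)*(x - xs) - cos(theta)*(y - ys) = 0  (plus noise),
    so H = [sin(theta), -cos(theta)] and z = sin(theta)*xs - cos(theta)*ys."""
    xs, ys = sensor_xy
    H = [math.sin(theta), -math.cos(theta)]
    z = math.sin(theta) * xs - math.cos(theta) * ys
    return H, z
```

The price of this linearity is that the effective measurement noise becomes state-dependent and correlated with the measurement, which is one source of the bias the comparison in the paper probes.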
NASA Astrophysics Data System (ADS)
Wang, Dong; Sun, Shilong; Tse, Peter W.
2015-02-01
A general sequential Monte Carlo method, particularly a general particle filter, attracts much attention in prognostics recently because it is able to on-line estimate posterior probability density functions of the state functions used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and be used to perform an optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics and be applied more widely to any other situation in which the optimal wavelet filtering is required.
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video surveillance, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristic, can deal with nonlinear and non-Gaussian problems and is therefore widely used. Various proposal densities have been introduced into the particle filter to keep it valid when the target is occluded, or to recover tracking after a failure; however, in order to capture changes of the state space, a certain number of particles is needed to ensure sufficient samples, and this number grows exponentially with the state dimension, which increases the amount of computation. In this paper, the particle filter is combined with mean shift, addressing the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against complex backgrounds. From the two perspectives of adaptive multiple-information fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. From the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion. Based on an analysis of the infrared characteristics of the target, the algorithm first extracts grayscale and edge features of the target, guides these two features by the target's motion information, and thus obtains new motion-guided grayscale and motion-guided edge features. A new adaptive fusion mechanism is then proposed to integrate these two new features adaptively into the mean shift tracking framework. Finally, we design an automatic target model updating strategy
Filtering of windborne particles by a natural windbreak
NASA Astrophysics Data System (ADS)
Bouvet, Thomas; Loubet, Benjamin; Wilson, John D.; Tuzet, Andree
2007-06-01
New measurements of the transport and deposition of artificial heavy particles (glass beads) to a thick ‘shelterbelt’ of maize (width/height ratio W/H ≈ 1.6) are used to test numerical simulations with a Lagrangian stochastic trajectory model driven by the flow field from a RANS (Reynolds-averaged, Navier-Stokes) wind and turbulence model. We illustrate the ambiguity inherent in applying to such a thick windbreak the pre-existing (Raupach et al. 2001; Atmos. Environ. 35, 3373-3383) ‘thin windbreak’ theory of particle filtering by vegetation, and show that the present description, while much more laborious, provides a reasonably satisfactory account of what was measured. A sizeable fraction of the particle flux entering the shelterbelt across its upstream face is lifted out of its volume by the mean updraft induced by the deceleration of the flow in the near-upstream and entry region, and these particles thereby escape deposition in the windbreak.
Lagrange Interpolation Learning Particle Swarm Optimization.
Kai, Zhang; Jinchun, Song; Ke, Ni; Song, Li
2016-01-01
In recent years, comprehensive learning particle swarm optimization (CLPSO) has attracted the attention of many scholars for use in solving multimodal problems, as it is excellent at preserving the particles' diversity and thus preventing premature convergence. However, CLPSO exhibits low solution accuracy. Aiming to address this issue, we propose a novel algorithm called LILPSO. First, the algorithm introduces a Lagrange interpolation method to perform a local search around the global best point (gbest). Second, to obtain a better exemplar, the gbest and the historical best points (pbest) of two other particles are chosen for Lagrange interpolation, yielding a new exemplar that replaces CLPSO's comparison method. The numerical experiments conducted on various functions demonstrate the superiority of this algorithm, and the two methods are shown to be efficient at accelerating convergence without leading the particles to premature convergence. PMID:27123982
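The Lagrange-interpolation local search can be sketched in its simplest form: fit the interpolating parabola through three sampled points and take the vertex as the new candidate. This is a generic three-point quadratic interpolation step, not necessarily LILPSO's exact exemplar construction:

```python
def quadratic_vertex(x, f):
    """Fit the Lagrange interpolating parabola through three points
    (x[i], f[i]) and return the abscissa of its vertex - the candidate
    point a local search around gbest would propose."""
    x0, x1, x2 = x
    f0, f1, f2 = f
    num = f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2) + f2 * (x0**2 - x1**2)
    den = 2.0 * (f0 * (x1 - x2) + f1 * (x2 - x0) + f2 * (x0 - x1))
    return num / den   # undefined (den == 0) when the three points are collinear
```

For a function that is locally quadratic the vertex is recovered exactly, which is why a single interpolation step can sharpen the low solution accuracy the abstract attributes to CLPSO.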
Unit Commitment by Adaptive Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa
This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem with variable size and load demand, adaptive criteria are applied to the PSO parameters and to the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated in each generation instead of being kept fixed. To prevent the method from stagnating, idle particles are reset. The real-valued velocity is digitized (0/1) by a logistic function for binary UC. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
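The velocity-digitization step for binary UC is the standard logistic mapping from binary PSO; a minimal sketch (function and parameter names are illustrative):

```python
import math, random

def digitize(v, u):
    """Binary-PSO bit decision: squash the real-valued velocity v through
    a logistic function and compare with a uniform sample u in [0, 1)."""
    p_on = 1.0 / (1.0 + math.exp(-v))   # probability the unit commits ON
    return 1 if u < p_on else 0

# In a PSO iteration the uniform sample comes from the RNG:
bit = digitize(2.5, random.random())
```

Large positive velocities thus almost always commit the unit ON, large negative ones OFF, and velocities near zero leave the decision close to a coin flip.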
Solving constrained optimization problems with hybrid particle swarm optimization
NASA Astrophysics Data System (ADS)
Zahara, Erwie; Hu, Chia-Hsin
2008-11-01
Constrained optimization problems (COPs) are very important in that they frequently appear in the real world. A COP, in which both the function and constraints may be nonlinear, consists of the optimization of a function subject to constraints. Constraint handling is one of the major concerns when solving COPs with particle swarm optimization (PSO) combined with the Nelder-Mead simplex search method (NM-PSO). This article proposes embedded constraint handling methods, which include the gradient repair method and constraint fitness priority-based ranking method, as a special operator in NM-PSO for dealing with constraints. Experiments using 13 benchmark problems are explained and the NM-PSO results are compared with the best known solutions reported in the literature. Comparison with three different meta-heuristics demonstrates that NM-PSO with the embedded constraint operator is extremely effective and efficient at locating optimal solutions.
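The abstract describes the constraint fitness priority-based ranking only at a high level. As a sketch of the general idea, the widely used Deb-style feasibility rule compares candidates as (objective, total constraint violation) pairs; this is an illustrative stand-in, not necessarily the exact ranking embedded in NM-PSO:

```python
def better(a, b):
    """Feasibility-rule comparison for constrained optimization.
    a, b are (objective, total_violation) tuples: a feasible candidate
    beats an infeasible one, lower violation beats higher, and among
    feasible candidates the lower objective wins."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:          # both feasible: compare objectives
        return a if fa <= fb else b
    if va == 0 or vb == 0:           # exactly one feasible: it wins
        return a if va == 0 else b
    return a if va <= vb else b      # both infeasible: less violation wins
```

Plugging such a comparator into the pbest/gbest updates lets a PSO handle constraints without tuning penalty weights.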
Multi-prediction particle filter for efficient parallelized implementation
NASA Astrophysics Data System (ADS)
Chu, Chun-Yuan; Chao, Chih-Hao; Chao, Min-An; Wu, An-Yeu Andy
2011-12-01
Particle filter (PF) is an emerging signal processing methodology, which can effectively deal with nonlinear and non-Gaussian signals by a sample-based approximation of the state probability density function. The particle generation of the PF is a data-independent procedure and can be implemented in parallel. However, the resampling procedure in the PF is sequential in nature and difficult to parallelize. According to Amdahl's law, the sequential portion of a task limits the maximum speed-up of a parallelized implementation. Moreover, a large number of particles is usually required to obtain an accurate estimation, and the complexity of the resampling procedure is highly related to the number of particles. In this article, we propose a multi-prediction (MP) framework with two selection approaches. The proposed MP framework can reduce the number of particles required for a target estimation accuracy, thereby reducing the sequential resampling operation. Moreover, the overhead of the MP framework can easily be compensated by parallel implementation. The proposed MP-PF alleviates the global sequential operation by increasing the local parallel computation. In addition, the MP-PF is well suited to the multi-core graphics processing unit (GPU) platform, a popular parallel processing architecture. We give prototypical implementations of the MP-PFs on a multi-core GPU platform. For the classic bearing-only tracking experiments, the proposed MP-PF can be 25.1 and 15.3 times faster than the sequential importance resampling PF with 10,000 and 20,000 particles, respectively. Hence, the proposed MP-PF can enhance the efficiency of parallelization.
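The sequential bottleneck the article targets is easy to see in a concrete resampler. A sketch of systematic resampling (a common choice; the article does not specify which variant it benchmarks): one uniform draw fixes N evenly spaced pointers into the cumulative weight distribution, and the scan over that running sum is inherently serial:

```python
import random

def systematic_resample(weights, u0=None):
    """Systematic resampling: a single uniform draw u0 in [0, 1) fixes N
    evenly spaced pointers into the CDF of the normalized weights.
    Returns the list of selected particle indices."""
    n = len(weights)
    if u0 is None:
        u0 = random.random()
    total = sum(weights)
    u = u0 / n
    idx, cum, out = 0, weights[0] / total, []
    for _ in range(n):
        while u > cum:              # sequential scan of the running sum -
            idx += 1                # this loop is the part that resists
            cum += weights[idx] / total  # parallelization
        out.append(idx)
        u += 1.0 / n
    return out
```

With uniform weights every particle survives once; with all mass on one particle, every output index collapses onto it, which is exactly the degeneracy resampling exists to repair.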
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithmic logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation in 2010 and 2013. To demonstrate the applicability of the approach, we also apply MS-CAP to train a feedforward neural network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer. PMID:24344695
Achieving sub-nanometre particle mapping with energy-filtered TEM.
Lozano-Perez, S; de Castro Bernal, V; Nicholls, R J
2009-09-01
A combination of state-of-the-art instrumentation and optimized data processing has enabled for the first time the chemical mapping of sub-nanometre particles using energy-filtered transmission electron microscopy (EFTEM). Multivariate statistical analysis (MSA) generated reconstructed datasets where the signal from particles smaller than 1 nm in diameter was successfully isolated from the original noisy background. The technique has been applied to the characterization of oxide dispersion strengthened (ODS) reduced activation FeCr alloys, due to their relevance as structural materials for future fusion reactors. Results revealed that most nanometer-sized particles had a core-shell structure, with an Yttrium-Chromium-Oxygen-rich core and a nano-scaled Chromium-Oxygen-rich shell. This segregation to the nanoparticles caused a decrease of the Chromium dissolved in the matrix, compromising the corrosion resistance of the alloy. PMID:19505762
Analysis of single particle diffusion with transient binding using particle filtering.
Bernstein, Jason; Fricks, John
2016-07-21
Diffusion with transient binding occurs in a variety of biophysical processes, including movement of transmembrane proteins, T cell adhesion, and caging in colloidal fluids. We model diffusion with transient binding as a Brownian particle undergoing Markovian switching between free diffusion when unbound and diffusion in a quadratic potential centered around a binding site when bound. Assuming the binding site is the last position of the particle in the unbound state and Gaussian observational error obscures the true position of the particle, we use particle filtering to predict when the particle is bound and to locate the binding sites. Maximum likelihood estimators of diffusion coefficients, state transition probabilities, and the spring constant in the bound state are computed with a stochastic Expectation-Maximization (EM) algorithm. PMID:27107737
Quantum demolition filtering and optimal control of unstable systems.
Belavkin, V P
2012-11-28
A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216
Robust Tracking Using Particle Filter with a Hybrid Feature
NASA Astrophysics Data System (ADS)
Zhao, Xinyue; Satoh, Yutaka; Takauji, Hidenori; Kaneko, Shun'ichi
This paper presents a novel method for robust object tracking in video sequences using a hybrid feature-based observation model in a particle filtering framework. An ideal observation model should have both high ability to accurately distinguish objects from the background and high reliability to identify the detected objects. Traditional features are better at solving the former problem but weak in solving the latter one. To overcome that, we adopt a robust and dynamic feature called Grayscale Arranging Pairs (GAP), which has high discriminative ability even under conditions of severe illumination variation and dynamic background elements. Together with the GAP feature, we also adopt the color histogram feature in order to take advantage of traditional features in resolving the first problem. At the same time, an efficient and simple integration method is used to combine the GAP feature with color information. Comparative experiments demonstrate that object tracking with our integrated features performs well even when objects go across complex backgrounds.
Geoacoustic and source tracking using particle filtering: experimental results.
Yardim, Caglar; Gerstoft, Peter; Hodgkiss, William S
2010-07-01
A particle filtering (PF) approach is presented for performing sequential geoacoustic inversion of a complex ocean acoustic environment using a moving acoustic source. This approach treats both the environmental parameters [e.g., water column sound speed profile (SSP), water depth, sediment and bottom parameters] at the source location and the source parameters (e.g., source depth, range and speed) as unknown random variables that evolve as the source moves. This allows real-time updating of the environment and accurate tracking of the moving source. As a sequential Monte Carlo technique that operates on nonlinear systems with non-Gaussian probability densities, the PF is an ideal algorithm to perform tracking of environmental and source parameters, and their uncertainties via the evolving posterior probability densities. The approach is demonstrated on both simulated data in a shallow water environment with a sloping bottom and experimental data collected during the SWellEx-96 experiment. PMID:20649203
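The sequential Monte Carlo update at the heart of such a tracker can be sketched on a one-dimensional toy model. The random-walk state model and Gaussian likelihood below are illustrative stand-ins for the ocean-acoustic forward model and environmental state vector used in the paper:

```python
import math, random

def pf_step(particles, weights, observation, motion_std, obs_std, rng=random):
    """One sequential Monte Carlo update: propagate each particle through
    the (here: random-walk) state model, then reweight by the Gaussian
    likelihood of the new observation, and renormalize."""
    new_particles, new_weights = [], []
    for x, w in zip(particles, weights):
        x_new = x + rng.gauss(0.0, motion_std)                  # predict
        lik = math.exp(-0.5 * ((observation - x_new) / obs_std) ** 2)
        new_particles.append(x_new)
        new_weights.append(w * lik)                             # update
    total = sum(new_weights) or 1.0   # guard against total weight collapse
    return new_particles, [w / total for w in new_weights]
```

Iterating this step as the source moves is what lets the filter track the evolving posterior over both environmental and source parameters, with the spread of the weighted particles serving as the uncertainty estimate.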
A geometric method for optimal design of color filter arrays.
Hao, Pengwei; Li, Yan; Lin, Zhouchen; Dubois, Eric
2011-03-01
A color filter array (CFA) used in a digital camera is a mosaic of spectrally selective filters, which allows only one color component to be sensed at each pixel. The missing two components of each pixel have to be estimated by methods known as demosaicking. The demosaicking algorithm and the CFA design are crucial for the quality of the output images. In this paper, we present a CFA design methodology in the frequency domain. The frequency structure, which is shown to be just the symbolic DFT of the CFA pattern (one period of the CFA), is introduced to represent images sampled with any rectangular CFAs in the frequency domain. Based on the frequency structure, the CFA design involves the solution of a constrained optimization problem that aims at minimizing the demosaicking error. To decrease the number of parameters and speed up the parameter searching, the optimization problem is reformulated as the selection of geometric points on the boundary of a convex polygon or the surface of a convex polyhedron. Using our methodology, several new CFA patterns are found, which outperform the currently commercialized and published ones. Experiments demonstrate the effectiveness of our CFA design methodology and the superiority of our new CFA patterns. PMID:20858581
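The claim that the frequency structure is "just the symbolic DFT of the CFA pattern" can be made concrete on the familiar 2x2 Bayer period, a worked example rather than one of the paper's optimized patterns. Each pixel is an (r, g, b) indicator vector, and for a 2x2 period the DFT kernel reduces to signs (-1)^(ux+vy):

```python
def bayer_frequency_structure():
    """Symbolic DFT of the 2x2 Bayer period [[R, G], [G, B]]:
    F(u, v) = (1/4) * sum_{x,y} pixel(x, y) * (-1)^(u*x + v*y).
    F(0,0) is the luma carrier; the other entries are chroma carriers."""
    R, G, B = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    pattern = [[R, G], [G, B]]
    F = {}
    for u in (0, 1):
        for v in (0, 1):
            F[(u, v)] = tuple(
                sum(pattern[x][y][c] * (-1) ** (u * x + v * y)
                    for x in (0, 1) for y in (0, 1)) / 4.0
                for c in range(3))
    return F
```

The output shows the well-known decomposition: a luma component (R + 2G + B)/4 at DC and chroma components such as (R − 2G + B)/4 modulated to the corner frequency, which is the structure the CFA optimization then shapes to minimize demosaicking error.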
Particle Swarm Optimization with Double Learning Patterns
Shen, Yuanxia; Wei, Linna; Zeng, Chuanhua; Chen, Jian
2016-01-01
Particle Swarm Optimization (PSO) is an effective tool in solving optimization problems. However, PSO usually suffers from the premature convergence due to the quick losing of the swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristic of learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs the master swarm and the slave swarm with different learning patterns to achieve a trade-off between the convergence speed and the swarm diversity. The particles in the master swarm and the slave swarm are encouraged to explore search for keeping the swarm diversity and to learn from the global best particle for refining a promising solution, respectively. When the evolutionary states of two swarms interact, an interaction mechanism is enabled. This mechanism can help the slave swarm in jumping out of the local optima and improve the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants. PMID:26858747
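For reference, the canonical global-best PSO that PSO-DLP modifies with its two learning patterns can be sketched compactly. This is the textbook baseline under common parameter choices (w = 0.7, c1 = c2 = 1.5 are assumptions), not the paper's master/slave scheme:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, rng=random):
    """Canonical global-best PSO. Returns (best position, best value)."""
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    gx, gbest = P[g][:], pbest[g]            # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])   # cognitive
                           + c2 * rng.random() * (gx[d] - X[i][d]))    # social
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, gx = fx, X[i][:]
    return gx, gbest
```

Because every particle is pulled toward the same gbest, diversity drains quickly on multimodal functions; that is precisely the premature-convergence failure mode the two-swarm design in PSO-DLP is built to counter.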
Lagrange Interpolation Learning Particle Swarm Optimization
2016-01-01
In recent years, comprehensive learning particle swarm optimization (CLPSO) has attracted the attention of many scholars for solving multimodal problems, as it is excellent in preserving the particles' diversity and thus preventing premature convergence. However, CLPSO exhibits low solution accuracy. To address this issue, we propose a novel algorithm called LILPSO. First, this algorithm introduces a Lagrange interpolation method to perform a local search around the global best point (gbest). Second, to obtain a better exemplar, the gbest and the historical best points (pbest) of two other particles are chosen for a Lagrange interpolation that yields a new exemplar, replacing CLPSO's comparison method. The numerical experiments conducted on various functions demonstrate the superiority of this algorithm, and the two methods are shown to be efficient at accelerating convergence without leading the particles to premature convergence. PMID:27123982
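The interpolation idea behind such a local search can be illustrated in isolation: fit a parabola through three sampled points (a 3-point Lagrange interpolation) and jump to its vertex. This is a generic sketch of quadratic interpolation, not LILPSO's exact exemplar-construction rule:

```python
def quadratic_interp_min(x0, x1, x2, f):
    """Estimate the minimizer of f from the parabola through three points,
    i.e. a 3-point Lagrange interpolation of f."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    if den == 0:
        return x1  # the three points are collinear; keep the middle one
    return x1 - 0.5 * num / den

# For an exactly quadratic function the vertex is recovered in one step:
est = quadratic_interp_min(0.0, 1.0, 3.0, lambda x: (x - 2.0) ** 2 + 1.0)
```

For a true quadratic the estimate is exact; for a general multimodal function it serves as a cheap local refinement step around gbest.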
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10. PMID:25826934
PARTICLE REMOVAL AND HEAD LOSS DEVELOPMENT IN BIOLOGICAL FILTERS
The physical performance of granular media filters was studied under pre-chlorinated, backwash-chlorinated, and nonchlorinated conditions. Overall, biological filtration produced high-quality water. Although effluent turbidities showed little difference between the perform...
Particle filter with one-step randomly delayed measurements and unknown latency probability
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Huang, Yulong; Li, Ning; Zhao, Lin
2016-01-01
In this paper, a new particle filter is proposed to solve the nonlinear and non-Gaussian filtering problem when measurements are randomly delayed by one sampling time and the latency probability of the delay is unknown. In the proposed method, particles and their weights are updated in a Bayesian filtering framework by considering the randomly delayed measurement model, and the latency probability is identified by a maximum likelihood criterion. The superior performance of the proposed particle filter compared with existing methods, and the effectiveness of the proposed identification method for the latency probability, are both illustrated in two numerical examples concerning a univariate non-stationary growth model and bearings-only tracking.
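A plain bootstrap particle filter on the univariate non-stationary growth model mentioned above, without the delayed-measurement extension, might look like the sketch below; the particle count, noise variances, and initial spread are illustrative assumptions, not the paper's settings:

```python
import math, random

def bootstrap_pf(ys, n=500, q=math.sqrt(10.0), r=1.0, seed=1):
    """Bootstrap particle filter for the univariate non-stationary growth model:
        x_k = 0.5 x + 25 x / (1 + x^2) + 8 cos(1.2 k) + w_k,   y_k = x_k^2 / 20 + v_k.
    """
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, math.sqrt(5.0)) for _ in range(n)]
    estimates = []
    for k, y in enumerate(ys, start=1):
        # propagate through the transition model (the bootstrap proposal)
        parts = [0.5 * x + 25.0 * x / (1.0 + x * x) + 8.0 * math.cos(1.2 * k)
                 + rng.gauss(0.0, q) for x in parts]
        # weight by the Gaussian measurement likelihood
        ws = [math.exp(-0.5 * ((y - x * x / 20.0) / r) ** 2) for x in parts]
        tot = sum(ws)
        ws = [w / tot for w in ws] if tot > 0.0 else [1.0 / n] * n
        estimates.append(sum(w * x for w, x in zip(ws, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=ws, k=n)
    return estimates
```

The delayed-measurement variant would additionally mix the likelihood of the current and previous measurements according to the (estimated) latency probability.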
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use in optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and the proposed method gives the best results in terms of peak reconstruction error and transition band error while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
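The band-error terms in such a frequency-domain objective can be illustrated with a simplified discretization. The uniform grid, band edges, and mean-square form below are assumptions for illustration; the paper's actual objective also includes a transition-band term evaluated at the quadrature frequency:

```python
import cmath, math

def qmf_band_errors(h, n_grid=512, wp_frac=0.4, ws_frac=0.6):
    """Discretized passband/stopband L2 errors of a lowpass FIR prototype h,
    a simplified stand-in for a QMF design objective."""
    wp, ws = wp_frac * math.pi, ws_frac * math.pi
    grid = [math.pi * i / (n_grid - 1) for i in range(n_grid)]

    def mag(w):  # |H(e^{jw})| for FIR taps h
        return abs(sum(c * cmath.exp(-1j * w * k) for k, c in enumerate(h)))

    pb = [(1.0 - mag(w)) ** 2 for w in grid if w <= wp]   # deviation from unity
    sb = [mag(w) ** 2 for w in grid if w >= ws]           # stopband leakage
    return sum(pb) / len(pb), sum(sb) / len(sb)
```

An optimizer (gradient-based or nature-inspired) would then minimize a weighted sum of these errors over the filter taps h.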
NASA Astrophysics Data System (ADS)
Ala-Luhtala, Juha; Whiteley, Nick; Heine, Kari; Piche, Robert
2016-09-01
Twisted particle filters are a class of sequential Monte Carlo methods recently introduced by Whiteley and Lee to improve the efficiency of marginal likelihood estimation in state-space models. The purpose of this article is to extend the twisted particle filtering methodology, establish accessible theoretical results which convey its rationale, and provide a demonstration of its practical performance within particle Markov chain Monte Carlo for estimating static model parameters. We derive twisted particle filters that incorporate systematic or multinomial resampling and information from historical particle states, and a transparent proof which identifies the optimal algorithm for marginal likelihood estimation. We demonstrate how to approximate the optimal algorithm for nonlinear state-space models with Gaussian noise and we apply such approximations to two examples: a range and bearing tracking problem and an indoor positioning problem with Bluetooth signal strength measurements. We demonstrate improvements over standard algorithms in terms of variance of marginal likelihood estimates and Markov chain autocorrelation for given CPU time, and improved tracking performance using estimated parameters.
Particle Filtering Equalization Method for a Satellite Communication Channel
NASA Astrophysics Data System (ADS)
Sénécal, Stéphane; Amblard, Pierre-Olivier; Cavazzana, Laurent
2004-12-01
We propose the use of particle filtering techniques and Monte Carlo methods to tackle the in-line and blind equalization of a satellite communication channel. The main difficulties encountered are the nonlinear distortions caused by the amplifier stage in the satellite. Several processing methods manage to take these nonlinearities into account, but they require knowledge of a training input sequence for updating the equalizer parameters. Blind equalization methods also exist, but they require a Volterra model of the system, which is not suited to equalization for the present model. The aim of the method proposed in this paper is likewise to blindly restore the emitted message. To reach this goal, a Bayesian point of view is adopted. Prior knowledge of the emitted symbols and of the nonlinear amplification model, as well as the information available from the received signal, is jointly used by considering the posterior distribution of the input sequence. Such a probability distribution is very difficult to study and thus motivates the implementation of Monte Carlo simulation methods. The presentation of the equalization method is divided into two parts. The first part solves the problem for a simplified model, focusing on the nonlinearities of the model. The second part deals with the complete model, using the sampling approaches previously developed. The algorithms are illustrated and their performance is evaluated using bit error rate versus signal-to-noise ratio curves.
Ultrafine particle removal by residential heating, ventilating, and air-conditioning filters.
Stephens, B; Siegel, J A
2013-12-01
This work uses an in situ filter test method to measure the size-resolved removal efficiency of indoor-generated ultrafine particles (approximately 7-100 nm) for six new commercially available filters installed in a recirculating heating, ventilating, and air-conditioning (HVAC) system in an unoccupied test house. The fibrous HVAC filters were previously rated by the manufacturers according to ASHRAE Standard 52.2 and ranged from shallow (2.5 cm) fiberglass panel filters (MERV 4) to deep-bed (12.7 cm) electrostatically charged synthetic media filters (MERV 16). Measured removal efficiency ranged from 0 to 10% for most ultrafine particle (UFP) sizes with the lowest rated filters (MERV 4 and 6) to 60-80% for most UFP sizes with the highest rated filter (MERV 16). The deeper-bed filters generally achieved higher removal efficiencies than the panel filters, while maintaining a low pressure drop and higher airflow rate in the operating HVAC system. Assuming constant efficiency, a modeling effort using these measured values for new filters and other inputs from real buildings shows that MERV 13-16 filters could reduce the indoor proportion of outdoor UFPs (in the absence of indoor sources) by as much as a factor of 2-3 in a typical single-family residence relative to the lowest efficiency filters, depending in part on particle size. PMID:23590456
Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V
2015-01-01
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes-optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that learning of the forward model is more than 10 times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690
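For reference, the Kalman filter that PIAF is related to reduces to a few lines in the scalar case when the forward model is known. This is the standard textbook recursion, not PIAF itself; the model coefficients and noise variances below are illustrative:

```python
def kalman_1d(ys, a=1.0, c=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for  x_k = a x_{k-1} + w_k,  y_k = c x_k + v_k,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    out = []
    for y in ys:
        x, p = a * x, a * a * p + q            # predict with the forward model
        k = p * c / (c * c * p + r)            # Kalman gain
        x = x + k * (y - c * x)                # update with the innovation
        p = (1.0 - k * c) * p
        out.append(x)
    return out
```

The delusional-loop problem arises precisely when `a` (the forward model) is itself being learned from the innovations while the filter trusts it too much; PIAF's contribution is to handle that joint inference in a Bayes-optimal way.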
Distributed multi-sensor particle filter for bearings-only tracking
NASA Astrophysics Data System (ADS)
Zhang, Jungen; Ji, Hongbing
2012-02-01
In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance observability of the target is poor, sequential Monte Carlo (particle filtering, PF) based algorithms generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method provides better performance than the traditional PF method.
Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song
2013-09-01
Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of finding the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, the health state degradation process cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting by SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained from the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data of an engine were analyzed and forecasted by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed method. PMID:24369656
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
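A PSO path planner of this kind needs a fitness function that scores candidate waypoint sequences. A minimal sketch is path length plus an obstacle penalty; the sampling density, penalty weight, and circular obstacle representation here are illustrative assumptions, not the prototype program's actual cost model:

```python
import math

def path_cost(waypoints, obstacles, penalty=1000.0, samples=10):
    """Length of a piecewise-linear path plus a penalty for every sampled
    point that falls inside a circular obstacle given as (cx, cy, radius)."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        cost += math.hypot(x1 - x0, y1 - y0)   # traversal distance
        for i in range(samples + 1):           # sample along the segment
            t = i / samples
            px, py = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            for cx, cy, rad in obstacles:
                if math.hypot(px - cx, py - cy) < rad:
                    cost += penalty            # soft obstacle violation
    return cost
```

A PSO would then search over the free waypoint coordinates to minimize this cost; because violations are penalized rather than forbidden, the planner can still return a workable path when obstacles cannot be fully avoided, matching the "smart enough to recognize this and deal with it" requirement.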
Yang, Juan; Stewart, Marc; Maupin, Gary D.; Herling, Darrell R.; Zelenyuk, Alla
2009-04-15
Diesel offers higher fuel efficiency but produces more exhaust particulate matter. Diesel particulate filters are presently the most efficient means to reduce these emissions. These filters typically trap particles in two basic modes: at the beginning of the exposure cycle, particles are captured in the filter pores, and at longer times the particles form a "cake" on which further particles are trapped. Eventually the "cake" is removed by oxidation and the cycle is repeated. We have investigated the properties and behavior of two commonly used filters, silicon carbide (SiC) and cordierite (DuraTrap® RC), by exposing them to nearly spherical ammonium sulfate particles. We show that the transition from deep-bed filtration to "cake" filtration can easily be identified by recording the change in pressure across the filters as a function of exposure. We investigated the performance of these filters as a function of flow rate and particle size. The filters trap small and large particles more efficiently than particles that are ~80 to 200 nm in aerodynamic diameter. A comparison between the experimental data and a simulation using an incompressible lattice-Boltzmann model shows very good qualitative agreement, but the model overpredicts the filters' trapping efficiency.
Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates
NASA Astrophysics Data System (ADS)
Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.
2015-12-01
Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized at the optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N, and these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.
Particle filtering methods for georeferencing panoramic image sequence in complex urban scenes
NASA Astrophysics Data System (ADS)
Ji, Shunping; Shi, Yun; Shan, Jie; Shao, Xiaowei; Shi, Zhongchao; Yuan, Xiuxiao; Yang, Peng; Wu, Wenbin; Tang, Huajun; Shibasaki, Ryosuke
2015-07-01
Georeferencing image sequences is critical for mobile mapping systems. Traditional methods such as bundle adjustment need adequate and well-distributed ground control points (GCPs) when accurate GPS data are not available in complex urban scenes. For applications over large areas, automatic extraction of GCPs by matching vehicle-borne image sequences with georeferenced ortho-images is a better choice than intensive GCP collection by field surveying. However, such image-matching-generated GCPs are highly noisy, especially in complex urban street environments, due to shadows, occlusions and moving objects in the ortho-images. This study presents a probabilistic solution that integrates matching and localization under one framework. First, a probabilistic and global localization model is formulated based on Bayes' rule and a Markov chain. Unlike many conventional methods, our model can accommodate non-Gaussian observations. In the next step, a particle filtering method is applied to determine this model under highly noisy GCPs. Owing to the multiple-hypothesis tracking represented by diverse particles, the method can balance the strength of geometric and radiometric constraints, i.e., drifted motion models and noisy GCPs, and guarantee an approximately optimal trajectory. Tests were carried out with thousands of mobile panoramic images and aerial ortho-images. Compared with conventional extended Kalman filtering and a global registration method, the proposed approach can succeed even under more than 80% gross errors in the GCPs and reaches an accuracy equivalent to traditional bundle adjustment with dense and precise control.
ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435
Stillo, Andrew; Ricketts, Craig I.
2013-07-01
High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance, and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature after loading in use over long service intervals, is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1.
NASA Astrophysics Data System (ADS)
Chen, Jing; Liu, Tundong; Jiang, Hao
2016-01-01
A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimization objective, the proposed method establishes a multi-objective model by taking two design objectives into account: minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating the elitist non-dominated sorting genetic algorithm (NSGA-II) and the technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for candidate solutions in terms of both objectives, and the obtained results are provided as a Pareto front. Subsequently, the best compromise solution is determined from the Pareto front by the TOPSIS method according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation while the dispersion spectra of the designed filter are optimized simultaneously.
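The TOPSIS step that picks the best compromise from a Pareto front can be sketched as follows for minimization objectives; the vector normalization used below is one common choice, and the weight vector stands in for the decision maker's preference:

```python
import math

def topsis(front, weights):
    """Return the index of the best compromise row of `front` (objective
    values, all to be minimized), ranked by TOPSIS closeness to the ideal."""
    m, n = len(front), len(front[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in front)) or 1.0 for j in range(n)]
    R = [[row[j] / norms[j] * weights[j] for j in range(n)] for row in front]
    ideal = [min(R[i][j] for i in range(m)) for j in range(n)]  # best per objective
    worst = [max(R[i][j] for i in range(m)) for j in range(n)]  # worst per objective
    scores = []
    for row in R:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 1.0)
    return max(range(m), key=lambda i: scores[i])
```

On a front with two extreme solutions and one balanced one, equal weights select the balanced solution, which is the intended "best compromise" behavior.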
Optimization of the performances of correlation filters by pre-processing the input plane
NASA Astrophysics Data System (ADS)
Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.
2016-01-01
We report findings on the optimization of the performance of correlation filters. First, we propose and validate an optimization of ROC curves adapted to the correlation technique. Analysis then suggests that a pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Our results demonstrate that this method is remarkably efficient in increasing the performance of a VanderLugt correlator.
NASA Astrophysics Data System (ADS)
Zaugg, David A.; Samuel, Alphonso A.; Waagen, Donald E.; Schmitt, Harry A.
2004-07-01
Bearings-only tracking is widely used in the defense arena. Its value can be exploited in systems using optical sensors and sonar, among others. Non-linearity and non-Gaussian prior statistics are among the complications of bearings-only tracking. Several filters have been used to overcome these obstacles, including particle filters and multiple hypothesis extended Kalman filters (MHEKF). Particle filters can accommodate a wide range of distributions and do not need to be linearized, so they seem ideally suited for this problem. An MHEKF can only approximate the prior distribution of a bearings-only tracking scenario and needs to be linearized. However, the likelihood distribution maintained for each MHEKF hypothesis demonstrates significant memory and lends stability to the algorithm, potentially enhancing tracking convergence. Also, the MHEKF is insensitive to outliers. For the scenarios under investigation, the sensor platform is tracking a moving and a stationary target. The sensor is allowed to maneuver in an attempt to maximize tracking performance. For these scenarios, we compare and contrast the acquisition time and mean-squared tracking error performance of particle filters and the MHEKF via Monte Carlo simulation.
Watson, J.H.P.
1995-02-01
This paper describes the structure and properties of a novel permanently magnetised magnetic filter for fine, friable radioactive material. A filter described and tested previously was designed so that the holes in the filter are left open as capture proceeds, which means the pressure drop builds up only slowly. That filter is not suitable for friable composite particles, which can be broken by mechanical forces. The structure of the magnetic part of the second filter has been changed so as to strongly capture particles composed of fine particles weakly bound together, which tend to break when captured. This uses a principle of assisted capture, in which coarse particles aid the capture of the fine fragments. The technique has the unfortunate consequence that the pressure drop across the filter rises faster as capture proceeds than in the filter described previously. These filters have the following characteristics: (1) No external magnet is required. (2) No external power is required. (3) Small in size and portable. (4) Easily interchangeable. (5) Can be cleaned without demagnetising.
Goodarz Ahmadi
2002-07-01
In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel and a Lagrangian trajectory analysis for particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power Systems Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the space between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered: a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of larger particles in the filter cake lowers its mechanical strength, allowing the back-pulse process to remove the filter cake more easily. A laboratory-scale filter vessel for testing the cold-flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was studied experimentally. A computer model of the experimental vessel was also developed, and the gas flow and particle transport patterns were evaluated.
NASA Astrophysics Data System (ADS)
Hawkes, Jeremy J.; Coakley, W. Terence; Gröschl, Martin; Benes, Ewald; Armstrong, Sian; Tasker, Paul J.; Nowotny, Helmut
2002-03-01
The quantitative performance of a "single half-wavelength" acoustic resonator operated at frequencies around 3 MHz as a continuous flow microparticle filter has been investigated. Standing-wave acoustic radiation pressure on suspended particles (5-μm latex) drives them towards the center of the half-wavelength separation channel. Clarified suspending phase from the region closest to the filter wall is drawn away through a downstream outlet. The filtration efficiency of the device was established from continuous turbidity measurements at the filter outlet. The frequency dependence of the acoustic energy density in the aqueous particle suspension layer of the filter system was obtained by application of the transfer matrix model [H. Nowotny and E. Benes, J. Acoust. Soc. Am. 82, 513-521 (1987)]. Both the measured clearances and the calculated energy density distributions showed a maximum at the fundamental of the piezoceramic transducer and a second, significantly larger, maximum at another system resonance not coinciding with any of the transducer or empty-chamber resonances. The calculated frequency of this principal energy density maximum was in excellent agreement with the optimal clearance frequency for the four tested channel widths. The high-resolution measurements of filter performance provide, for the first time, direct verification of the matrix model predictions of the frequency dependence of acoustic energy density in the water layer.
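The "single half-wavelength" geometry ties the channel width directly to the operating frequency: w = c / (2f). A minimal sketch of this relation, using a nominal speed of sound in water (the exact value depends on temperature and the suspension):

```python
def half_wavelength_width(frequency_hz, sound_speed=1480.0):
    """Channel width that supports a half-wavelength standing wave: w = c / (2 f).
    sound_speed defaults to a nominal value for water in m/s."""
    return sound_speed / (2.0 * frequency_hz)

# Near the ~3 MHz operating frequency, the separation channel is roughly
# a quarter of a millimetre wide, with a single pressure node at its center
# toward which the radiation force drives the particles.
width_m = half_wavelength_width(3.0e6)
```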
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo
2014-10-23
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul
2015-03-01
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
Design of SLM-constrained MACE filters using simulated annealing optimization
NASA Astrophysics Data System (ADS)
Khan, Ajmal; Rajan, P. Karivaratha
1993-10-01
Among the available filters for pattern recognition, the MACE filter produces the sharpest peak with very small sidelobes. However, when these filters are implemented using practical spatial light modulators (SLMs), because of the constrained nature of the amplitude and phase modulation characteristics of the SLM, the implementation is no longer optimal. The resulting filter response does not produce high accuracy in the recognition of the test images. In this paper, this deterioration in response is overcome by designing constrained MACE filters such that the filter is allowed to take only those phase-amplitude combinations that can be implemented on a specified SLM. The design is carried out using a simulated annealing optimization technique. The algorithm developed and the results obtained from computer simulations of the designed filters are presented.
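The simulated annealing search described above can be sketched generically. The acceptance rule below is the standard Metropolis criterion; the toy cost, the 16-level SLM constraint set, and all parameter values are illustrative assumptions, not the paper's MACE formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.95, n_iter=500):
    """Generic annealing loop of the kind used to search SLM-realizable
    filter values (a sketch; the MACE-specific cost is not reproduced)."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(n_iter):
        xn = neighbor(x)
        cn = cost(xn)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if cn < c or rng.random() < np.exp(-(cn - c) / t):
            x, c = xn, cn
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling                       # geometric cooling schedule
    return best_x, best_c

# Toy constrained problem: find the SLM-realizable value closest to a target phase
allowed = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # 16 SLM phase levels
target = 1.3
best, err = simulated_annealing(
    cost=lambda v: (v - target) ** 2,
    x0=allowed[0],
    neighbor=lambda v: allowed[rng.integers(16)])          # random constrained proposal
```

In the paper's setting, the state would be the full vector of SLM-constrained filter values and the cost the MACE objective, rather than this scalar toy.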
NASA Astrophysics Data System (ADS)
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
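A 2-D Gabor kernel of the kind whose parameters (wavelength, orientation, envelope width) a PSO search would tune can be sketched as follows. The function names, parameter values, and fixed bank layout are assumptions for illustration, not the paper's optimized filters.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter; these parameters would form
    the position vector of a PSO particle in the setting above."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def filter_bank(size=15, n_orient=4, wavelength=6.0, sigma=3.0):
    """Fixed-parameter bank; an optimizer would instead search these values."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    return [gabor_kernel(size, wavelength, t, sigma) for t in thetas]

bank = filter_bank()
```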
NASA Astrophysics Data System (ADS)
Han, Hua; Ding, Yongsheng; Hao, Kuangrong; Hu, Liangjian
2013-07-01
In this article, we first introduce the problem of state estimation for jump Markov nonlinear systems (JMNSs). Since the density evolution of the predictor equations satisfies the Fokker-Planck-Kolmogorov equation (FPKE) in Bayesian estimation, the FPKE in conjunction with Bayes' conditional density update formula provides the optimal estimate for a general continuous-discrete nonlinear filtering problem. It is well known that the analytical solution of the FPKE and Bayes' formula is extremely difficult to obtain except in a few special cases. Hence, we design a particle filter to achieve Bayesian estimation of JMNSs. To test the viability of our algorithm, we apply it to multiple-target tracking in video surveillance. Before the simulation, we introduce the 'birth' and 'death' description of targets, the targets' transitional probability model, and the observation probability. The experimental results show good performance of the proposed filter for multiple-target tracking.
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Dowd, Michael; Fennel, Katja
2013-05-01
We assimilate satellite observations of surface chlorophyll into a three-dimensional biological ocean model in order to improve its state estimates, using a particle filter referred to as sequential importance resampling (SIR). Particle filters represent an alternative to other, more commonly used ensemble-based state estimation techniques like the ensemble Kalman filter (EnKF). Unlike the EnKF, particle filters do not require normality assumptions about the model error structure and are thus suitable for highly nonlinear applications. However, their application in oceanographic contexts is typically hampered by the high dimensionality of the model's state space. We apply SIR to a high-dimensional model with a small ensemble size (20) and modify the standard SIR procedure to avoid complications posed by the high dimensionality of the model state. Two extensions to the SIR are a simple smoother to deal with outliers in the observations, and state augmentation, which provides the SIR with parameter memory. Our goal is to test the feasibility of biological state estimation with SIR for realistic models. For this purpose we compare the SIR results to a model simulation with optimal parameters with respect to the same set of observations. By running replicates of our main experiments, we assess the robustness of our SIR implementation. We show that SIR is suitable for satellite data assimilation into biological models and that both extensions, the smoother and state augmentation, are required for robust results and an improved fit to the observations.
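The SIR cycle the abstract refers to (propagate, reweight by the observation likelihood, resample) can be sketched on a toy one-dimensional model. The random-walk dynamics, Gaussian likelihood, and all noise levels below are stand-in assumptions, not the ocean model or the paper's extensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, observation, obs_std, proc_std):
    """One SIR cycle on a toy 1-D random-walk state standing in for
    the high-dimensional biological model state."""
    # Forecast: stochastic model propagation
    particles = particles + rng.normal(0.0, proc_std, size=particles.shape)
    # Update: Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) to combat weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 1.0, size=20)      # ensemble of 20, as in the paper
weights = np.full(20, 1.0 / 20)
for obs in [0.5, 0.7, 0.6]:
    particles, weights = sir_step(particles, weights, obs,
                                  obs_std=0.2, proc_std=0.1)
estimate = particles.mean()                    # posterior mean estimate
```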
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO has no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design problem for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than PSO. TLBO is therefore preferable where accuracy matters more than convergence speed.
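A minimal sketch of the TLBO teacher and learner phases, applied to a toy parameter-identification objective. The population size, iteration count, and quadratic cost below are illustrative assumptions, not the paper's IIR benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo_minimize(f, bounds, n_learners=20, n_iter=50):
    """Minimal TLBO sketch: beyond population size and iteration count
    there are no algorithm-specific parameters, as the abstract notes."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_learners, dim))
    cost = np.apply_along_axis(f, 1, X)
    for _ in range(n_iter):
        teacher = X[cost.argmin()]
        mean = X.mean(axis=0)
        # Teacher phase: move the class toward the teacher
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xnew = np.clip(X + rng.random((n_learners, dim)) * (teacher - TF * mean),
                       lo, hi)
        cnew = np.apply_along_axis(f, 1, Xnew)
        improved = cnew < cost
        X[improved], cost[improved] = Xnew[improved], cnew[improved]
        # Learner phase: learn from a randomly chosen peer
        for i in range(n_learners):
            j = rng.integers(n_learners)
            if j == i:
                continue
            direction = X[i] - X[j] if cost[i] < cost[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim) * direction, lo, hi)
            c = f(cand)
            if c < cost[i]:
                X[i], cost[i] = cand, c
    return X[cost.argmin()], cost.min()

# Toy "plant identification": recover the coefficients of a known 2-parameter filter
target = np.array([0.7, -0.2])
best, err = tlbo_minimize(lambda p: np.sum((p - target) ** 2),
                          bounds=[(-1, 1), (-1, 1)])
```

In the paper's setting the cost would instead be the output error between the unknown plant and the candidate IIR filter driven by the same input.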
NASA Astrophysics Data System (ADS)
Kawasaki, Shoji; Hayashi, Yasuhiro; Matsuki, Junya; Kikuya, Hirotaka; Hojo, Masahide
Recently, the harmonic troubles in a distribution network are worried in the background of the increase of the connection of distributed generation (DG) and the spread of the power electronics equipments. As one of the strategies, control the harmonic voltage by installing an active filter (AF) has been researched. In this paper, the authors propose a computation method to determine the optimal allocations, gains and installation number of AFs so as to minimize the maximum value of voltage total harmonic distortion (THD) for a distribution network with DGs. The developed method is based on particle swarm optimization (PSO) which is one of the nonlinear optimization methods. Especially, in this paper, the case where the harmonic voltage or the harmonic current in a distribution network is assumed by connecting many DGs through the inverters, and the authors propose a determination method of the optimal allocation and gain of AF that has the harmonic restrictive effect in the whole distribution network. Moreover, the authors propose also about a determination method of the necessary minimum installation number of AFs, by taking into consideration also about the case where the target value of harmonic suppression cannot be reached, by one set only of AF. In order to verify the validity and effectiveness of the proposed method, the numerical simulations are carried out by using an analytical model of distribution network with DGs.
Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus
2015-06-01
Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
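The linear-filter form of template matching described above can be sketched directly: with noise covariance Σ and template s, the filter is f = Σ⁻¹s and detection reduces to thresholding the filter output over sliding windows. The white-noise covariance and synthetic spike below are assumptions for illustration, not the paper's benchmark data.

```python
import numpy as np

def matched_filter_output(signal, template, noise_cov):
    """Output of the linear filter f = Σ⁻¹ s, the LDA/matched-filter
    direction for colored Gaussian noise (a sketch of the idea, not
    the paper's full spike-sorting pipeline)."""
    f = np.linalg.solve(noise_cov, template)     # whitened template
    L = len(template)
    return np.array([f @ signal[i:i + L]
                     for i in range(len(signal) - L + 1)])

rng = np.random.default_rng(2)
template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # toy spike waveform
noise_cov = np.eye(5)                            # white-noise special case
signal = rng.normal(0, 0.1, 100)
signal[40:45] += template                        # embed one spike at sample 40
out = matched_filter_output(signal, template, noise_cov)
detected = int(out.argmax())                     # peak marks the spike onset
```

With genuinely colored noise, `noise_cov` would be estimated from spike-free segments of the recording, and the analytic threshold discussed in the abstract would replace the simple argmax.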
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
Application of extended Kalman particle filter for dynamic interference fringe processing
NASA Astrophysics Data System (ADS)
Ermolaev, Petr A.; Volynsky, Maxim A.
2016-04-01
The application of an extended Kalman particle filter for dynamic estimation of interferometric signal parameters is considered. A detailed description of the algorithm is given. The proposed algorithm yields satisfactory estimates of model interferometric signals even in the presence of erroneous information on the model signal parameters. It provides twice the calculation speed of the conventional particle filter by reducing the number of vectors approximating the probability density function of the signal parameter distribution.
NASA Astrophysics Data System (ADS)
Miller, R.
2015-12-01
Following the success of the implicit particle filter in twin experiments with a shallow water model of the nearshore environment, the planned next step is application to the intensive Sandy Duck data set, gathered at Duck, NC. Adaptation of the present system to the Sandy Duck data set will require construction and evaluation of error models for both the model and the data, as well as significant modification of the system to allow for the properties of the data set. Successful implementation of the particle filter promises to shed light on the details of the capabilities and limitations of shallow water models of the nearshore ocean relative to more detailed models. Since the shallow water model admits distinct dynamical regimes, reliable parameter estimation will be important. Previous work by other groups gives cause for optimism. In this talk I will describe my progress toward implementation of the new system, including problems solved, pitfalls remaining and preliminary results.
Research on a Lamb Wave and Particle Filter-Based On-Line Crack Propagation Prognosis Method.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Cai, Jian; Yang, Weibo
2016-01-01
Prognostics and health management techniques have drawn widespread attention due to their ability to facilitate maintenance activities based on need. On-line prognosis of fatigue crack propagation can offer information for optimizing operation and maintenance strategies in real-time. This paper proposes a Lamb wave-particle filter (LW-PF)-based method for on-line prognosis of fatigue crack propagation which takes advantages of the possibility of on-line monitoring to evaluate the actual crack length and uses a particle filter to deal with the crack evolution and monitoring uncertainties. The piezoelectric transducers (PZTs)-based active Lamb wave method is adopted for on-line crack monitoring. The state space model relating to crack propagation is established by the data-driven and finite element methods. Fatigue experiments performed on hole-edge crack specimens have validated the advantages of the proposed method. PMID:26950130
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
PARTICLE TRANSPORT AND DEPOSITION IN THE HOT-GAS FILTER AT WILSONVILLE
Goodarz Ahmadi
1999-06-24
Particle transport and deposition in the Wilsonville hot-gas filter vessel is studied. The filter vessel contains a total of 72 filters, which are arranged in two tiers. These are modeled by six upper and one lower cylindrical effective filters. An unstructured grid of 312,797 cells generated by GAMBIT is used in the simulations. The Reynolds stress model of the FLUENT™ (version 5.0) code is used for evaluating the gas mean velocities and root-mean-square fluctuation velocities in the vessel. The particle equation of motion includes the drag, gravitational and lift forces. The turbulent instantaneous fluctuation velocity is simulated by a filtered Gaussian white-noise model provided by the FLUENT code. The particle deposition patterns are evaluated, and the effect of particle size is studied. The effects of turbulent dispersion, the lift force and the gravitational force are analyzed. The results show that the deposition pattern depends on particle size. Turbulent dispersion plays an important role in the transport and deposition of particles. Lift and gravitational forces affect the motion of large particles, but have no effect on small particles.
Ceramem filters for removal of particles from hot gas streams
Bishop, B.A.; Goldsmith, R.L.
1994-11-01
The need for hot gas cleanup in the power, advanced coal conversion, process and incineration industries is well documented and extensive development is being undertaken to develop and demonstrate suitable filtration technologies. In general, process conditions include (a) oxidizing or reducing atmospheres, (b) temperatures to 1800°F, (c) pressures to 300 psi, and (d) potentially corrosive components in the gas stream. The most developed technologies entail the use of candle or tube filters, which suffer from fragility, lack of oxidation/corrosion resistance, and high cost. The ceramic membrane filter described below offers the potential to eliminate these limitations.
Optimized digital filtering techniques for radiation detection with HPGe detectors
NASA Astrophysics Data System (ADS)
Salathe, Marco; Kihm, Thomas
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
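A textbook trapezoidal shaper of the general kind discussed (not the GEANA implementation, and without the ballistic-deficit correction) can be sketched as a convolution that turns a step-like preamplifier pulse into a flat-topped trapezoid whose height estimates the pulse amplitude. The rise time, flat-top length, and pulse values below are illustrative assumptions.

```python
import numpy as np

def trapezoidal_shape(x, k, m):
    """Shape a step-like pulse into a trapezoid with rise time k and
    flat top m: the kernel's running sum ramps 0 -> 1 over k samples,
    holds for m samples, then ramps back to 0."""
    h = np.concatenate([np.ones(k), np.zeros(m), -np.ones(k)]) / k
    return np.convolve(x, h)[:len(x)]

# Idealized step pulse of amplitude 3.0 at sample 50 (decay ignored)
x = np.zeros(200)
x[50:] = 3.0
y = trapezoidal_shape(x, k=10, m=5)
amplitude = y.max()          # flat-top height recovers the step amplitude
```

For a real HPGe pulse with exponential decay, a pole-zero (deconvolution) stage would precede this shaper, and the ballistic-deficit correction described in the abstract would act on the flat top.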
Assessing consumption of bioactive micro-particles by filter-feeding Asian carp
Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.
2012-01-01
Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 μm) and large (150-200 μm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.
Mountney, John; Silage, Dennis; Obeid, Iyad
2010-01-01
Both linear and nonlinear estimation algorithms have been successfully applied as neural decoding techniques in brain machine interfaces. Nonlinear approaches such as Bayesian auxiliary particle filters offer improved estimates over other methodologies seemingly at the expense of computational complexity. Real-time implementation of particle filtering algorithms for neural signal processing may become prohibitive when the number of neurons in the observed ensemble becomes large. By implementing a parallel hardware architecture, filter performance can be improved in terms of throughput over conventional sequential processing. Such an architecture is presented here and its FPGA resource utilization is reported. PMID:21096196
Mukhopadhyay, Somparna; Hazra, Lakshminarayan
2015-11-01
Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique. PMID:26560575
Particle size for greatest penetration of HEPA filters - and their true efficiency
da Roza, R.A.
1982-12-01
The particle size that most readily penetrates a filter is a function of filter media construction, aerosol density, and air velocity. In this paper the published results of several experiments are compared with a modern filtration theory that predicts single-fiber efficiency and the particle size of maximum penetration. For high-efficiency particulate air (HEPA) filters used under design conditions this size is calculated to be 0.21 μm in diameter. This is in good agreement with the experimental data. The penetration at 0.21 μm is calculated to be seven times greater than at the 0.3 μm used for testing HEPA filters. Several mechanisms by which filters may have a lower efficiency in use than when tested are discussed.
NASA Astrophysics Data System (ADS)
Xu, Dexiang
This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.
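A genetic search over finite-word-length FIR coefficients can be sketched as follows. The 8-bit quantization, the seeding of the population with the rounded real-valued prototype, and all operator choices are illustrative assumptions, not the dissertation's RPPGA (which is parallel and real-parameter).

```python
import numpy as np

rng = np.random.default_rng(3)

def freq_response(h, n_pts=64):
    """Magnitude response |H(e^{jw})| on a grid of n_pts frequencies."""
    w = np.linspace(0, np.pi, n_pts)
    return np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)

# Real-valued 11-tap lowpass prototype: Hamming-windowed sinc, cutoff 0.25*fs
N, wc = 11, 0.5 * np.pi
n = np.arange(N) - N // 2
ideal = (wc / np.pi) * np.sinc(wc / np.pi * n) * np.hamming(N)
target = freq_response(ideal)

def fitness(genome, bits=8):
    h = genome / 2 ** (bits - 1)            # interpret integers as Q-format taps
    return -np.sum((freq_response(h) - target) ** 2)

def ga_design(pop_size=40, n_gen=60, bits=8):
    lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    pop = rng.integers(lo, hi + 1, size=(pop_size, N))
    pop[0] = np.round(ideal * 2 ** (bits - 1)).astype(int)  # seed with rounded prototype
    for _ in range(n_gen):
        fit = np.array([fitness(g) for g in pop])
        pop = pop[np.argsort(fit)[::-1]]                    # elitist ranking
        elite = pop[: pop_size // 2]
        # Uniform crossover between random elite parents, then integer mutation
        parents = elite[rng.integers(len(elite), size=(pop_size // 2, 2))]
        mask = rng.random((pop_size // 2, N)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random(children.shape) < 0.1
        children = np.clip(children + mutate * rng.integers(-4, 5, children.shape),
                           lo, hi)
        pop = np.vstack([elite, children])
    fit = np.array([fitness(g) for g in pop])
    return pop[fit.argmax()]

best = ga_design()                  # best quantized coefficient vector found
```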
Reduced Complexity HMM Filtering With Stochastic Dominance Bounds: A Convex Optimization Approach
NASA Astrophysics Data System (ADS)
Krishnamurthy, Vikram; Rojas, Cristian R.
2014-12-01
This paper uses stochastic dominance principles to construct upper and lower sample path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower and upper bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
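The underlying HMM filter recursion, whose O(X^2) per-step cost motivates the low-rank O(XR) bounds, can be sketched on a two-state chain. The transition and observation matrices below are arbitrary illustrative values, not from the paper.

```python
import numpy as np

def hmm_filter_step(pi, A, B, y):
    """One step of the HMM filter: predict with transition matrix A,
    correct with observation likelihoods B[:, y], then normalize.
    The A.T @ pi product costs O(X^2) in the state dimension X; the
    paper's rank-R surrogate matrices reduce this to O(XR)."""
    pred = A.T @ pi                 # prior over the next state
    post = B[:, y] * pred           # Bayes correction
    return post / post.sum()

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # row-stochastic transition matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])          # B[x, y] = P(observation y | state x)
pi = np.array([0.5, 0.5])           # uniform prior
for y in [0, 0, 1]:
    pi = hmm_filter_step(pi, A, B, y)
```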
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied at the initial level to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the iteratively refined Lucas-Kanade method, we have identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we show that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
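The pipeline the paper studies, Gaussian pre-smoothing followed by Lucas-Kanade, can be sketched in its simplest single-window form. The synthetic blob, the smoothing sigma, and the whole-image least-squares solve below are illustrative assumptions, not the paper's pyramidal, iteratively refined implementation.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian pre-filter: the step whose standard deviation
    the paper ties empirically to image intensity."""
    k = gaussian_kernel1d(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)

def lucas_kanade(I1, I2):
    """Single-window LK: least-squares solve of [Ix Iy] v = -It."""
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                                   # (vx, vy)

# Synthetic pair: a Gaussian blob translated by one pixel in x
yy, xx = np.mgrid[0:64, 0:64]
I1 = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / (2 * 5.0**2))
I2 = np.roll(I1, 1, axis=1)
vx, vy = lucas_kanade(smooth(I1, 1.5), smooth(I2, 1.5))
```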
NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS
Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2 μm MMAD)...
Capellari, Giovanni; Eftekhar Azam, Saeed; Mariani, Stefano
2015-01-01
Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615
Chen, Yong; Zhang, Rong-Hua; Shang, Lei; Hu, Eric
2013-06-01
A method based on motion vectors of feature points and a particle filter has been proposed and developed for object detection and tracking with an active/moving camera. The object is first detected from a histogram of motion vectors; then, on the basis of the particle filter algorithm, the weighting factors are obtained via color information. In addition, a re-sampling strategy and SURF feature points are used to remedy the drawback of particle degeneration. Experimental results presented in the paper demonstrate the practicability and accuracy of the new method. PMID:23822380
Backus, Sterling J.; Kapteyn, Henry C.
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect it while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Leach, R.R.; Schultz, C.; Dowla, F.
1997-07-15
Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have few available recordings. Development and testing of detection and discrimination algorithms requires a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary strongly as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regional or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content differs from that of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals, or scales, each representing a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low-SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required. The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation
A genetic resampling particle filter for freeway traffic-state estimation
NASA Astrophysics Data System (ADS)
Bi, Jun; Guan, Wei; Qi, Long-Tao
2012-06-01
On-line estimation of the state of traffic based on data sampled by electronic detectors is important for intelligent traffic management and control. Because the traffic state is nonlinear, and because particle filters have good characteristics when it comes to solving nonlinear problems, a genetic resampling particle filter is proposed to estimate the state of freeway traffic. In this paper, a freeway section of the northern third ring road in the city of Beijing in China is considered as the experimental object. By analysing the traffic-state characteristics of the freeway, the traffic is modeled based on a validated second-order macroscopic traffic flow model. In order to address the particle degeneration issue in the performance of the particle filter, a genetic mechanism is introduced into the resampling process. The realization of a genetic particle filter for freeway traffic-state estimation is discussed in detail, and the filter estimation performance is validated and evaluated using the acquired experimental data.
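As a hypothetical sketch (not the authors' implementation), the genetic idea can be grafted onto the resampling step of a standard particle filter roughly as follows, with crossover and mutation restoring the diversity lost to degeneration; all parameter values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_resample(particles, weights, crossover_rate=0.5, mutation_std=0.05):
    """Resample particles by weight, then apply crossover and mutation
    (the genetic mechanism) to counteract particle degeneration."""
    n = len(particles)
    # Standard multinomial resampling proportional to normalized weights
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    new = particles[idx].copy()
    # Crossover: blend random pairs of resampled particles
    for i in range(0, n - 1, 2):
        if rng.random() < crossover_rate:
            a = rng.random()
            blend = a * new[i] + (1 - a) * new[i + 1]
            new[i + 1] = a * new[i + 1] + (1 - a) * new[i]
            new[i] = blend
    # Mutation: small Gaussian perturbation restores particle diversity
    new += mutation_std * rng.normal(size=new.shape)
    return new

# Toy usage: prior particles near 0, likelihood centered at 1
particles = rng.normal(size=(200, 1))
weights = np.exp(-0.5 * (particles[:, 0] - 1.0) ** 2)
resampled = genetic_resample(particles, weights)
```

After the genetic step, the resampled cloud concentrates near the likelihood peak while crossover and mutation keep duplicated particles from collapsing onto identical states.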
An optimal numerical filter for wide-field-of-view measurements of earth-emitted radiation
NASA Technical Reports Server (NTRS)
Smith, G. L.; House, F. B.
1981-01-01
A technique is described in which all data points along an arc of the orbit may be used in an optimal numerical filter for wide-field-of-view measurements of earth emitted radiation. The statistical filter design is derived whereby the filter is required to give a minimum variance estimate of the radiative exitance at discrete points along the ground track of the satellite. An equation for the optimal numerical filter is given by minimizing the estimate error variance equation with respect to the filter weights, resulting in a discrete form of the Wiener-Hopf equation. Finally, variances of the errors in the radiant exitance can be computed along the ground track and in the cross track directions.
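In illustrative notation (our symbols, not necessarily the paper's), writing the exitance estimate at a ground-track point as a weighted sum of the N measurements along the arc, the minimization described above proceeds as:

```latex
\hat{W} = \sum_{i=1}^{N} a_i\, m_i,
\qquad
\frac{\partial}{\partial a_j}\, E\!\left[(\hat{W} - W)^2\right] = 0
\;\Longrightarrow\;
\sum_{i=1}^{N} a_i\, E[m_i m_j] = E[W m_j], \quad j = 1, \dots, N,
```

a discrete Wiener-Hopf system that is solved for the filter weights $a_i$ given the measurement and cross covariances.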
Chaotic Particle Swarm Optimization with Mutation for Classification
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of premature convergence to a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm-classifier, genetic algorithm, and imperialist competitive algorithm-classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937
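The interplay of chaotic sequences and mutation can be illustrated with a minimal sketch (our own illustrative code with assumed coefficient values, not the authors' algorithm): a logistic map supplies the velocity-update coefficients in place of uniform random draws, and Gaussian mutation perturbs a random subset of particles to escape local minima:

```python
import numpy as np

rng = np.random.default_rng(1)

def chaotic_pso(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0), p_mut=0.1):
    """Sketch of PSO with chaotic coefficients and mutation (illustrative)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    z = 0.7  # logistic-map state generating a chaotic sequence in (0, 1)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)          # logistic map, chaotic at r = 4
        r1, r2 = z, 1.0 - z              # chaotic coefficients replace uniform draws
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        # Mutation: Gaussian perturbation of a random subset of particles
        mask = rng.random(n_particles) < p_mut
        x[mask] += rng.normal(scale=0.1, size=(int(mask.sum()), dim))
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Toy usage: minimize a sphere function with optimum at (1, 1)
best, best_f = chaotic_pso(lambda p: float(((p - 1.0) ** 2).sum()))
```

For classification, the same loop would optimize classifier parameters (or, in binary form, feature-subset masks) with a fitness built from classification accuracy.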
Filter performance of n99 and n95 facepiece respirators against viruses and ultrafine particles.
Eninger, Robert M; Honda, Takeshi; Adhikari, Atin; Heinonen-Tanski, Helvi; Reponen, Tiina; Grinshpun, Sergey A
2008-07-01
The performance of three filtering facepiece respirators (two models of N99 and one N95) challenged with an inert aerosol (NaCl) and three virus aerosols (enterobacteriophages MS2 and T4 and Bacillus subtilis phage), all with significant ultrafine components, was examined using a manikin-based protocol with respirators sealed on manikins. Three inhalation flow rates, 30, 85, and 150 l min⁻¹, were tested. The filter penetration and the quality factor were determined. Between-respirator and within-respirator comparisons of penetration values were performed. At the most penetrating particle size (MPPS), >3% of MS2 virions penetrated through filters of both N99 models at an inhalation flow rate of 85 l min⁻¹. Inhalation airflow had a significant effect upon particle penetration through the tested respirator filters. The filter quality factor was found suitable for making relative performance comparisons. The MPPS for challenge aerosols was <0.1 µm in electrical mobility diameter for all tested respirators. Mean particle penetration (by count) was significantly increased when the size fraction of <0.1 µm was included as compared to particles >0.1 µm. The filtration performance of the N95 respirator approached that of the two models of N99 over the range of particle sizes tested (approximately 0.02 to 0.5 µm). Filter penetration of the tested biological aerosols did not exceed that of inert NaCl aerosol. The results suggest that inert NaCl aerosols may generally be appropriate for modeling filter penetration of similarly sized virions. PMID:18477653
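For reference, the filter quality factor used for such relative comparisons is conventionally defined from the fractional penetration $P$ and the filter pressure drop $\Delta p$:

```latex
q_F = \frac{-\ln P}{\Delta p}
```

so that lower penetration achieved at lower breathing resistance yields a higher quality factor.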
NASA Astrophysics Data System (ADS)
Raitoharju, Matti; Nurminen, Henri; Piché, Robert
2015-12-01
Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
Simplifying Physical Realization of Gaussian Particle Filters with Block-Level Pipeline Control
NASA Astrophysics Data System (ADS)
Hong, Sangjin; Djurić, Petar M.; Bolić, Miodrag
2005-12-01
We present an efficient physical realization method of particle filters for real-time tracking applications. The methodology is based on block-level pipelining, where data transfer between processing blocks is effectively controlled by autonomous distributed controllers. Block-level pipelining maintains the inherent operational concurrency within the algorithm for high-throughput execution. The proposed use of controllers, via parameter reconfiguration, greatly simplifies the overall controller structure and alleviates potential speed bottlenecks that may arise due to controller complexity. A Gaussian particle filter for the bearings-only tracking problem is realized based on the presented methodology. For demonstration, the individual coarse-grain processing blocks comprising the particle filter are synthesized on a commercial FPGA. From the execution characteristics obtained from the implementation, the overall controller structure is derived according to the methodology and its temporal correctness is verified using Verilog and SystemC.
Khan, T.; Ramuhalli, Pradeep; Dass, Sarat
2011-06-30
Flaw profile characterization from NDE measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem, and subsequent application of a sequential Monte Carlo method called particle filtering, has been proposed by the authors in an earlier publication [1]. In this study, the problem of flaw characterization from multi-sensor data is considered. The NDE inverse problem is posed as a statistical inverse problem and particle filtering is modified to handle data from multiple measurement modes. The measurement modes are assumed to be independent of each other with principal component analysis (PCA) used to legitimize the assumption of independence. The proposed particle filter based data fusion algorithm is applied to experimental NDE data to investigate its feasibility.
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of a collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for the assigned current position and speed. Since established methodologies can suffer some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
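The Distance at Closest Point of Approach itself is a simple geometric computation once the relative state has been estimated. A minimal constant-velocity sketch (our notation, assuming straight-line motion of both aircraft):

```python
import numpy as np

def closest_point_of_approach(p_own, v_own, p_int, v_int):
    """Predicted minimum separation (DCPA) and its time (TCPA) for two
    aircraft assumed to fly straight at constant speed. Inputs are 2-D."""
    dp = np.asarray(p_int, float) - np.asarray(p_own, float)  # relative position
    dv = np.asarray(v_int, float) - np.asarray(v_own, float)  # relative velocity
    vv = dv @ dv
    # Closest approach at t = -(dp.dv)/|dv|^2, clamped to the future (t >= 0)
    t = 0.0 if vv == 0.0 else max(0.0, -(dp @ dv) / vv)
    return float(np.linalg.norm(dp + t * dv)), float(t)

# Head-on geometry: own aircraft eastbound, intruder westbound 300 m to the side
dcpa, tcpa = closest_point_of_approach([0, 0], [100, 0], [5000, 300], [-100, 0])
# dcpa = 300.0 m at tcpa = 25.0 s
```

A particle filter improves the estimate of `p_int` and `v_int`; the DCPA then follows deterministically (or as a particle-weighted distribution) from this geometry.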
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared with using only the particle filter. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thus achieving time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291
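A scalar toy model illustrates the Rao-Blackwellised split described above; the model, noise values and variable names are our assumptions for illustration, not the paper's WSN clock model. Particles carry the non-linear variable while each particle runs a one-dimensional Kalman filter for the conditionally linear one:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy conditionally linear-Gaussian model (illustrative):
#   theta_t = theta_{t-1} + w_t, w_t ~ N(0, q)   # non-linear role: observed squared
#   b_t     = b_{t-1}                            # linear sub-state (constant offset)
#   y_t     = theta_t**2 + b_t + v_t, v_t ~ N(0, r)
q, r, T, N = 0.01, 0.05, 50, 300
theta_true, b_true = 1.0, 0.5
ys = []
for _ in range(T):
    theta_true += rng.normal(scale=np.sqrt(q))
    ys.append(theta_true**2 + b_true + rng.normal(scale=np.sqrt(r)))

# RB particle filter: particles for theta, one Kalman mean/variance per particle for b
theta = rng.normal(1.0, 0.5, N)   # particle cloud for the non-linear variable
mb = np.zeros(N)                  # per-particle Kalman mean of b
Pb = np.ones(N)                   # per-particle Kalman variance of b
for y in ys:
    theta = theta + rng.normal(scale=np.sqrt(q), size=N)  # propagate particles
    S = Pb + r                             # innovation variance (unit obs gain on b)
    innov = y - (theta**2 + mb)            # innovation per particle
    w = np.exp(-0.5 * innov**2 / S) / np.sqrt(S)  # marginal likelihood weights
    K = Pb / S                             # Kalman gain for the linear sub-state
    mb = mb + K * innov                    # Kalman update of b
    Pb = Pb - K * Pb
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)       # resample particles and their filters
    theta, mb, Pb = theta[idx], mb[idx], Pb[idx]

y_hat = float((theta**2 + mb).mean())      # filtered estimate of the noise-free signal
```

Only the one-dimensional theta is sampled; b is handled analytically, which is the source of the efficiency gain over a plain particle filter on the joint state.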
A particle-filtering approach to convoy tracking in the midst of civilian traffic
NASA Astrophysics Data System (ADS)
Pollard, Evangeline; Pannetier, Benjamin; Rombaut, Michèle
2008-04-01
In the battlefield surveillance domain, ground target tracking is used to evaluate the threat. Data used for tracking are given by a Ground Moving Target Indicator (GMTI) sensor, which only detects moving targets. Multiple target tracking has been widely studied, but most algorithms have weaknesses when targets are close together, as they are in a convoy. In this work, we propose a filtering approach for convoys in the midst of civilian traffic. Although inspired by particle filtering, our specific algorithm is too complex to be applied to all the targets. That is why well-discriminated targets are tracked using an Interacting Multiple Model-Multiple Hypothesis Tracking (IMM-MHT), whereas the convoy targets are tracked with a specific particle filter. We make the assumption that the convoy is detected (position and number of targets). Our approach is based on an Independent Partition Particle Filter (IPPF) incorporating constraint-regions. The originality of our approach is to consider a velocity constraint (all the vehicles belonging to the convoy have the same velocity) and a group constraint. Consequently, the multitarget state vector contains all the positions of the individual targets and a single convoy velocity vector. When another target is detected crossing or overtaking the convoy, a specific algorithm is used and the non-cooperative target is tracked with an adapted particle filter. As demonstrated by our simulations, a large increase in convoy tracking performance is obtained with our approach.
Evaluation of filter media for particle number, surface area and mass penetrations.
Li, Lin; Zuo, Zhili; Japuntich, Daniel A; Pui, David Y H
2012-07-01
The National Institute for Occupational Safety and Health (NIOSH) developed a standard for respirator certification under 42 CFR Part 84, using a TSI 8130 automated filter tester with photometers. A recent study showed that photometric detection methods may not be sensitive enough for measuring engineered nanoparticles. Present NIOSH standards for penetration measurement are mass-based; however, the threshold limit value/permissible exposure limit for worker exposure to engineered nanoparticles is not yet clear. There is a lack of standardized filter test development for engineered nanoparticles, and development of a simple nanoparticle filter test is indicated. To better understand filter performance against engineered nanoparticles and the correlations among different tests, initial penetration levels of one fiberglass and two electret filter media were measured using a series of polydisperse and monodisperse aerosol test methods at two different laboratories (University of Minnesota Particle Technology Laboratory and 3M Company). Monodisperse aerosol penetrations were measured by a TSI 8160 using NaCl particles from 20 to 300 nm. Particle penetration curves and overall penetrations were measured by scanning mobility particle sizer (SMPS), condensation particle counter (CPC), nanoparticle surface area monitor (NSAM), and TSI 8130 at two face velocities and three layer thicknesses. Results showed that reproducible, comparable filtration data were achieved between the two laboratories, with proper control of test conditions and calibration procedures. For particle penetration curves, the experimental results of monodisperse testing agreed well with polydisperse SMPS measurements. The most penetrating particle sizes (MPPSs) of the electret and fiberglass filter media were ~50 and 160 nm, respectively. For overall penetrations, the CPC and NSAM results of polydisperse aerosols were close to the penetration at the corresponding median particle sizes. For each filter type, power
Sun, W Y
1993-04-01
This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.
Isolated particle swarm optimization with particle migration and global best adoption
NASA Astrophysics Data System (ADS)
Tsai, Hsing-Chih; Tyan, Yaw-Yauan; Wu, Yun-Wu; Lin, Yong-Huang
2012-12-01
Isolated particle swarm optimization (IPSO) segregates particles into several sub-swarms in order to improve the ability of global optimization. In this study, particle migration and global best adoption (gbest adoption) are used to improve IPSO. Particle migration allows particles to travel among sub-swarms, based on the fitness of the sub-swarms. Gbest adoption allows sub-swarms to peek at the gbest, proportionally or probabilistically, after a certain number of iterations, i.e. gbest replacing and gbest sharing, respectively. Three well-known benchmark functions are utilized to determine the parameter settings of the IPSO. Then, 13 benchmark functions are used to study the performance of the designed IPSO. Computational experience demonstrates that the designed IPSO is superior to the original version of particle swarm optimization (PSO) in terms of the accuracy and stability of the results when isolation, particle migration and gbest sharing are involved.
Hybrid three-dimensional variation and particle filtering for nonlinear systems
NASA Astrophysics Data System (ADS)
Leng, Hong-Ze; Song, Jun-Qiang
2013-03-01
This work addresses the problem of estimating the states of nonlinear dynamic systems with sparse observations. We present a hybrid three-dimensional variation (3DVar) and particle filtering (PF) method, which combines the advantages of 3DVar and particle-based filters. By minimizing the cost function, this approach produces a better proposal distribution of the state. Afterwards, the stochastic resampling step in standard PF can be avoided through a deterministic scheme. The simulation results show that the performance of the new method is superior to traditional ensemble Kalman filtering (EnKF) and standard PF, especially in highly nonlinear systems.
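In standard data assimilation notation ($x_b$ the background state, $B$ and $R$ the background and observation error covariances, $y$ the observations, $H$ the observation operator; the symbols are ours, following common convention rather than the paper), the 3DVar cost function whose minimizer seeds the proposal reads:

```latex
J(x) = \tfrac{1}{2}\,(x - x_b)^{\mathrm{T}} B^{-1} (x - x_b)
     + \tfrac{1}{2}\,\bigl(y - H(x)\bigr)^{\mathrm{T}} R^{-1} \bigl(y - H(x)\bigr)
```

Minimizing $J$ for each particle pulls the proposal toward the observations, which is what yields the improved proposal distribution relative to sampling from the prior alone.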
Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.
2011-01-01
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but the filters designed and fabricated to date have not attained the spectral selectivity (≤ 32 cm−1) commonly employed for measurements of condensed matter using FT-IR spectroscopy. Incorporating the dispersion and optical absorption of the materials, we present here an optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm−1. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have a bandwidth narrower than that of a filter designed by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filter-based instrumentation for common measurements of condensed matter, including tissue and polymer samples. PMID:22109445
Design of an optimal-weighted MACE filter realizable with arbitrary SLM constraints
NASA Astrophysics Data System (ADS)
Ge, Jin; Rajan, P. Karivaratha
1996-03-01
A realizable optimal weighted minimum average correlation energy (MACE) filter with arbitrary spatial light modulator (SLM) constraints is presented. The MACE filter can be considered as the cascade of two separate stages. The first stage is the prewhitener, which essentially converts colored noise to white noise. The second stage is the conventional synthetic discriminant function (SDF), which is optimal for white noise but uses training vectors subjected to the prewhitening transformation. The energy spectrum matrix is therefore very important for filter design. The new weight function we introduce adjusts the correlation energy to improve the performance of the MACE filter on current SLMs. The action of the weight function is to emphasize the signal energy at some frequencies and reduce it at others so as to improve the correlation plane structure. The choice of weight function, which is used to enhance noise tolerance and reduce sidelobes, is related to a priori pattern recognition knowledge. An algorithm which combines an iterative optimization technique with Juday's minimum Euclidean distance (MED) method is developed for the design of the realizable optimal weighted MACE filter. The performance of the designed filter is evaluated with numerical experiments.
Roundness error assessment based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Zhao, J. W.; Chen, G. Q.
2005-01-01
Roundness error assessment is always a nonlinear optimization problem without constraints. The method of particle swarm optimization (PSO) is proposed to evaluate the roundness error. PSO is an evolutionary algorithm derived from the behavior of flocking birds. PSO regards each feasible solution as a particle (a point in n-dimensional space). It initializes a swarm of random particles in the feasible region. Each particle always tracks two best positions: its own best position so far and the best position found by the whole swarm. Using the inertia weight and these two best positions, all particles update their velocities and positions, with quality measured by the fitness function. After iterations, the swarm converges to an optimized solution. The reciprocal of the error assessment objective function is adopted as the fitness. In this paper the calculation procedures with PSO are given. Finally, an assessment example is used to verify this method. The results show that the proposed method provides a new way to assess other form and position errors because it can always converge to the global optimal solution.
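The assessment scheme can be sketched as follows (an illustrative minimum-zone formulation with synthetic data; the paper's exact objective, fitness scaling, and PSO parameters may differ). PSO searches over candidate circle centres, and the objective is the radial band width of the measured points about that centre:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic measured profile: circle of radius 10 centred at (0.3, -0.2) plus noise
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = np.stack([0.3 + 10.0 * np.cos(angles), -0.2 + 10.0 * np.sin(angles)], axis=1)
pts += rng.normal(scale=0.01, size=pts.shape)

def roundness(center):
    """Minimum-zone roundness error for a candidate centre: the width
    max(r_i) - min(r_i) of the radial band containing all points."""
    r = np.hypot(pts[:, 0] - center[0], pts[:, 1] - center[1])
    return float(r.max() - r.min())

# Plain PSO over candidate centres (objective minimized directly here;
# the paper uses its reciprocal as the fitness to be maximized)
n, iters = 30, 150
x = rng.uniform(-1.0, 1.0, (n, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([roundness(p) for p in x])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    fx = np.array([roundness(p) for p in x])
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]
    g = pbest[pbest_f.argmin()].copy()

error = roundness(g)  # assessed roundness error at the best centre found
```

Because the objective is non-smooth (max and min of radii), a derivative-free swarm search of this kind avoids the gradient issues that local methods face.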
Automatized Parameterization of DFTB Using Particle Swarm Optimization.
Chou, Chien-Pin; Nishimura, Yoshifumi; Fan, Chin-Chai; Mazur, Grzegorz; Irle, Stephan; Witek, Henryk A
2016-01-12
We present a novel density-functional tight-binding (DFTB) parametrization toolkit developed to optimize the parameters of various DFTB models in a fully automatized fashion. The main features of the algorithm, based on the particle swarm optimization technique, are discussed, and a number of initial pilot applications of the developed methodology to molecular and solid systems are presented. PMID:26587758
Removal of Particles and Acid Gases (SO2 or HCl) with a Ceramic Filter by Addition of Dry Sorbents
Hemmer, G.; Kasper, G.; Wang, J.; Schaub, G.
2002-09-20
The present investigation intends to add to the fundamental process design know-how for dry flue gas cleaning, especially with respect to process flexibility, in cases where variations in the type of fuel, and thus in the concentration of contaminants in the flue gas, require optimization of operating conditions. In particular, temperature effects of the physical and chemical processes occurring simultaneously in the gas-particle dispersion and in the filter cake/filter medium are investigated in order to improve the predictive capabilities for identifying optimum operating conditions. Sodium bicarbonate (NaHCO₃) and calcium hydroxide (Ca(OH)₂) are known as efficient sorbents for neutralizing acid flue gas components such as HCl, HF, and SO₂. According to their physical properties (e.g. porosity, pore size) and chemical behavior (e.g. thermal decomposition, reactivity for gas-solid reactions), optimum conditions for their application vary widely. The results presented concentrate on the development of quantitative data for filtration stability and overall removal efficiency as affected by operating temperature. Experiments were performed in a small pilot unit with a ceramic filter disk of the type Dia-Schumalith 10-20 (Fig. 1, described in more detail in Hemmer 2002 and Hemmer et al. 1999), using model flue gases containing SO₂ and HCl, flyash from wood bark combustion, and NaHCO₃ as well as Ca(OH)₂ as sorbent material (particle size d₅₀/d₈₄: 35/192 µm, and 3.5/16, respectively). The pilot unit consists of an entrained flow reactor (gas duct) representing the raw gas volume of a filter house and the filter disk with a filter cake, operating continuously, simulating filter cake build-up and cleaning of the filter medium by jet pulse. Temperatures varied from 200 to 600 °C, sorbent stoichiometric ratios from zero to 2, inlet concentrations were on the order of 500 to 700 mg/m³, water vapor contents ranged from
Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study
NASA Astrophysics Data System (ADS)
Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.
2013-12-01
The safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated to experiments conducted at the US Army Corps of Engineering and Research Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of the soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, the high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under
Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems
NASA Astrophysics Data System (ADS)
Qi, Di; Majda, Andrew J.
2015-04-01
It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space, as well as effective resampling strategies to handle realizability and stability issues, is discussed. To test the performance of the blended algorithms, the forty dimensional Lorenz 96 system is utilized with a five dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics, with changing observation time frequency, and with both dense and sparse spatial observations. In real applications, perfect dynamical models are always inaccessible given the complexities in both the modeling and the computation of high dimensional turbulent systems. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill both in capturing non-Gaussian statistics and in achieving accurate filtering results in various dynamical regimes with and without model errors.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as with the numerical noise and truly discrete variables present in the current example problem.
Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng
2016-01-01
Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes WSN observations unreliable and degrades the estimation accuracy of PFs. Beyond algorithm design, few works focus on improving the likelihood calculation, since it is usually pre-assumed to follow a given distribution model. In this paper, we propose a novel PF method, based on a new likelihood fusion method for WSNs, that further improves estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs prior information and a belief factor to reduce the effect of measurement noise. The optimal belief factor is obtained by minimizing the Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use it to develop three versions of adaptive PFs for a target tracking system using a wireless sensor network. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in high noise environments. In addition, the adaptive PFs are highly adaptable to the environment without imposing additional computational complexity. PMID:27249002
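The belief-factor idea in this abstract lends itself to a compact generic sketch. The step below tempers a Gaussian measurement likelihood with a given factor beta inside an otherwise standard bootstrap particle filter; the KL-divergence derivation of the optimal beta and the dynamic Gaussian noise model are omitted, and every name here (the function, `h`, `sigma`) is an illustrative assumption rather than the authors' code.

```python
import numpy as np

def adaptive_likelihood_pf(particles, weights, z, h, sigma, beta):
    """One measurement update of a bootstrap particle filter whose
    likelihood is tempered by a belief factor beta in (0, 1]:
    beta = 1 trusts the sensor fully, smaller beta flattens the
    likelihood to discount unreliable observations."""
    lik = np.exp(-0.5 * ((z - h(particles)) / sigma) ** 2)
    weights = weights * lik ** beta          # belief-factor tempering
    weights = weights / weights.sum()
    # Systematic resampling when the effective sample size collapses.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        cs = np.cumsum(weights)
        cs[-1] = 1.0                         # guard against round-off
        positions = (np.arange(n) + np.random.rand()) / n
        idx = np.searchsorted(cs, positions)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

With beta below one, an outlying measurement reweights the particle cloud less aggressively, which is one plausible reading of how a belief factor can suppress measurement noise.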
Quantum-Behaved Particle Swarm Optimization with Chaotic Search
NASA Astrophysics Data System (ADS)
Yang, Kaiqiao; Nomura, Hirosato
Chaotic search is introduced into Quantum-behaved Particle Swarm Optimization (QPSO) to increase the diversity of the swarm in the later stages of the search and thereby help the system escape from local optima. Taking full advantage of the ergodicity and randomicity of chaotic variables, the chaotic search is carried out in the neighborhoods of particles that are trapped in local optima. Experimental results on test functions show that QPSO with chaotic search outperforms both standard Particle Swarm Optimization (PSO) and QPSO.
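As a rough illustration of the mechanism described above, the sketch below performs a logistic-map chaotic search around a best-so-far position, the kind of neighborhood sweep used to free trapped particles. The function name, the r = 4 logistic map, and the `radius` parameter are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def chaotic_search(f, x_best, radius, iters=50):
    """Chaotic local search: an ergodic logistic-map variable c in (0, 1)
    generates candidate offsets in [-radius, radius] around x_best."""
    rng = np.random.default_rng(0)
    c = rng.uniform(0.1, 0.9, size=np.shape(x_best))   # avoid map fixed points
    best_x, best_f = np.asarray(x_best, dtype=float).copy(), f(x_best)
    for _ in range(iters):
        c = 4.0 * c * (1.0 - c)              # logistic map at r = 4 (chaotic)
        cand = x_best + (2.0 * c - 1.0) * radius
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f
```

Because the chaotic orbit is dense in (0, 1), long runs sweep the whole neighborhood rather than clustering, which is the ergodicity property the abstract exploits.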
Cardiac fiber tracking using adaptive particle filtering based on tensor rotation invariant in MRI
NASA Astrophysics Data System (ADS)
Kong, Fanhui; Liu, Wanyu; Magnin, Isabelle E.; Zhu, Yuemin
2016-03-01
Diffusion magnetic resonance imaging (dMRI) is a non-invasive method currently available for cardiac fiber tracking. However, accurate and efficient cardiac fiber tracking is still a challenge. This paper presents a probabilistic cardiac fiber tracking method based on particle filtering. In this framework, an adaptive sampling technique is presented to describe the posterior distribution of fiber orientations by adjusting the number and status of particles according to the fractional anisotropy of diffusion. An observation model is then proposed to update the weight of particles by rotating diffusion tensor from the primary eigenvector to a given fiber orientation while keeping the shape of the tensor invariant. The results on human cardiac dMRI show that the proposed method is robust to noise and outperforms conventional streamline and particle filtering techniques.
On the application of optimal wavelet filter banks for ECG signal classification
NASA Astrophysics Data System (ADS)
Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.
2014-03-01
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated to the filter cut-off sharpness. The goal consists of achieving a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can be subsequently used as inputs for training to a neural network classifier.
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The proposed scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
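The particle dynamics referred to above can be sketched in their canonical inertia-weight form; a scalar cost function stands in for the makespan/flowtime objective, and the parameter values are conventional defaults rather than the settings used in the paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=1):
    """Canonical global-best PSO: each velocity is pulled toward the
    particle's personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())
```

A grid scheduler would replace the continuous position with an encoding of job-to-resource assignments and evaluate makespan and flowtime inside `f`; the velocity/position update itself is unchanged.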
A multiobjective memetic algorithm based on particle swarm optimization.
Liu, Dasheng; Tan, K C; Goh, C K; Ho, W K
2007-02-01
In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. PMID:17278557
Accelerating Particle Filter Using Randomized Multiscale and Fast Multipole Type Methods.
Shabat, Gil; Shmueli, Yaniv; Bermanis, Amit; Averbuch, Amir
2015-07-01
The particle filter is a powerful tool for state tracking using non-linear observations. We present a multiscale-based method that accelerates the tracking computation of particle filters. Unlike the conventional approach, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset of the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses the particle subset to recover the density function for all the remaining particles not included in the chosen subset. The computational effort is substantial, especially when multiple objects are tracked concurrently; the proposed algorithm significantly reduces this load. By using the Fast Gauss Transform, the complexity of the particle selection step is reduced to linear time in n and k, where n is the number of particles and k is the number of particles in the selected subset. We demonstrate our method on both simulated and real data, such as object tracking in video sequences. PMID:26352448
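A heavily simplified stand-in for the subset-and-extend idea: the (possibly expensive) log-likelihood is evaluated on a random subset of one-dimensional particles and extended to the rest by Gaussian-kernel interpolation. The paper's matrix-decomposition-based selection and multiscale extension are replaced here by uniform sampling and a single-scale kernel, so treat this purely as an illustration.

```python
import numpy as np

def subset_weight_extension(particles, z, loglik, k=50, h=0.5, seed=0):
    """Evaluate loglik on only k sampled particles, then extend it to all
    n particles by normalized Gaussian-kernel interpolation."""
    rng = np.random.default_rng(seed)
    n = len(particles)
    idx = rng.choice(n, size=min(k, n), replace=False)
    sub_ll = np.array([loglik(p, z) for p in particles[idx]])
    # Kernel-weighted extension from the subset to every particle.
    d2 = (particles[:, None] - particles[idx][None, :]) ** 2
    K = np.exp(-d2 / (2.0 * h ** 2))
    ll = K @ sub_ll / K.sum(axis=1)
    w = np.exp(ll - ll.max())                # stabilized exponentiation
    return w / w.sum()
```

The cost drops from n likelihood evaluations per cycle to k, plus the kernel extension; in the paper the selection and transform steps are further accelerated with the Fast Gauss Transform.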
NASA Astrophysics Data System (ADS)
Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.
2014-12-01
Understanding the flow of fines and particulate sorting in porous and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies, to name a few. For example, the safety of dam structures requires characterizing the ability of granular filters to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In the case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated against experiments conducted at the US Army Corps of Engineers (USACE) Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and increasing concentration of suspended particles. We have extended these results to more realistic heterogeneous
Optimization of magnetic switches for single particle and cell transport
NASA Astrophysics Data System (ADS)
Abedini-Nassab, Roozbeh; Murdoch, David M.; Kim, CheolGi; Yellen, Benjamin B.
2014-06-01
The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.
Boundary filters for vector particles passing parity breaking domains
Kolevatov, S. S.; Andrianov, A. A.
2014-07-23
The electrodynamics supplemented with a Lorentz- and CPT-invariance-violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, are stitched together on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Effects of reflection from the boundary (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum across the boundary.
NASA Astrophysics Data System (ADS)
Zhang, Kai; Chen, Tianning; Wang, Xiaopeng; Fang, Jianglong
2016-03-01
To explore the optimal damping mechanism of non-obstructive particle dampers (NOPDs), the relationship between the damping performance of NOPDs and the motion mode of the damping particles inside them was investigated based on the rheological properties of vibrated granular particles. First, the damping performance of NOPDs under different excitation intensities and gap clearances was investigated via cantilever system experiments, and an approximate evaluation of the effective mass and effective damping of NOPDs was performed by fitting the experimental data to an equivalent single-degree-of-freedom (SDOF) system with no damping particles. Phase diagrams showing the motion mode of the damping particles under different excitation intensities and gap clearances were then obtained via a series of vibration table tests. Moreover, the dissipation characteristics of the damping particles were explored by the discrete element method (DEM). The results indicate that when NOPDs achieve their optimal damping effect, the granular Leidenfrost effect, whereby the entire particle bed in the NOPD is levitated above the vibrating base by a layer of highly energetic particles, is observed. Finally, the damping characteristics of NOPDs were explained by particle-particle and particle-wall collisions and friction based on the rheological behavior of the damping particles, and a new dissipation mechanism is proposed for the optimal damping performance of NOPDs.
Empirical Determination of Optimal Parameters for Sodium Double-Edge Magneto-Optic Filters
NASA Astrophysics Data System (ADS)
Barry, Ian F.; Huang, Wentao; Smith, John A.; Chu, Xinzhao
2016-06-01
A method is proposed for determining the optimal temperature and magnetic field strength used to condition a sodium vapor cell for use in a sodium Double-Edge Magneto-Optic Filter (Na-DEMOF). The desirable characteristics of these filters are first defined and then analyzed over a range of temperatures and magnetic field strengths, using an IDL Faraday filter simulation adapted for the Na-DEMOF. This simulation is then compared to real behavior of a Na-DEMOF constructed for use with the Chu Research Group's STAR Na Doppler resonance-fluorescence lidar for lower atmospheric observations.
NASA Astrophysics Data System (ADS)
Zhang, Hongjuan; Hendricks-Franssen, Harrie-Jan; Han, Xujun; Vrugt, Jasper A.; Vereecken, Harry
2016-04-01
Land surface models (LSMs) resolve the water and energy balance with different parameters and state variables. Many of the parameters of these models cannot be measured directly in the field, and require calibration against flux and soil moisture data. Two LSMs are used in our work: the Variable Infiltration Capacity Hydrologic Model (VIC) and the Community Land Model (CLM). Temporal variations in soil moisture content at 5, 20 and 50 cm depth in the Rollesbroich experimental watershed in Germany are simulated in both LSMs. Data assimilation (DA) provides a good way to jointly estimate soil moisture content and soil properties of the resolved soil domain. Four DA methods combined with the two LSMs are used in our work: the Ensemble Kalman Filter (EnKF) using state augmentation or dual estimation, the Residual Resampling Particle Filter (RRPF) and the Markov chain Monte Carlo Particle Filter (MCMCPF). These four DA methods are tuned and calibrated for a five month period, and subsequently evaluated for another five month period. The performance of the two LSMs and the four DA methods is compared. Our results show that all DA methods improve the estimation of soil moisture content of the VIC and CLM models, especially if the soil hydraulic properties (VIC), the maximum baseflow velocity (VIC) and/or soil texture (CLM) are jointly estimated with soil moisture content. The augmentation and dual estimation methods performed slightly better than RRPF and MCMCPF in the evaluation period. The differences in simulated soil moisture content between CLM and VIC were larger than the variations among the DA methods. The CLM performed better than the VIC model. The strong underestimation of soil moisture content in the third layer of the VIC model is likely related to an inadequate parameterization of groundwater drainage.
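State augmentation, as used with the EnKF above, can be sketched compactly: parameters are stacked with the state in one ensemble matrix, and observing the state alone updates both through the ensemble cross-covariance. This is a generic stochastic-EnKF analysis step under simplifying assumptions (scalar observation error, no localization), not the study's implementation.

```python
import numpy as np

def enkf_augmented_update(ens, obs, obs_err, H):
    """Stochastic EnKF analysis on an augmented ensemble (rows = state
    variables stacked with parameters, columns = members)."""
    rng = np.random.default_rng(0)
    n = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)          # anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n - 1) + obs_err ** 2 * np.eye(H.shape[0])
    P_xy = A @ HA.T / (n - 1)                          # cross-covariance
    K = P_xy @ np.linalg.inv(P_yy)                     # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, obs_err, (H.shape[0], n))
    return ens + K @ (perturbed - H @ ens)
```

Because the parameter rows covary with the observed state rows, an observation of soil moisture alone nudges, say, a soil-texture parameter in the direction consistent with the ensemble statistics.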
Optimization of primer specific filter metrics for the assessment of mitochondrial DNA sequence data
CURTIS, PAMELA C.; THOMAS, JENNIFER L.; PHILLIPS, NICOLE R.; ROBY, RHONDA K.
2011-01-01
Filter metrics are used as a quick assessment of sequence trace files in order to sort data into different categories, i.e., High Quality, Review, and Low Quality, without human intervention. The filter metrics consist of two numerical parameters for sequence quality assessment: trace score (TS) and contiguous read length (CRL). Primer specific settings for the TS and CRL were established using a calibration dataset of 2817 traces and validated using a concordance dataset of 5617 traces. Prior to optimization, 57% of the traces required manual review before import into a sequence analysis program, whereas after optimization only 28% of the traces required manual review. After optimization of primer specific filter metrics for mitochondrial DNA sequence data, the overall reduction in trace file review translates into increased throughput of data analysis and decreased time required for manual review. PMID:21171863
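The two-threshold sorting logic can be sketched as follows; the cutoff values used here are placeholders for illustration, not the primer-specific settings established in the study.

```python
def classify_trace(trace_score, crl, ts_cut=35, crl_cut=200):
    """Sort one sequence trace by trace score (TS) and contiguous read
    length (CRL): both pass -> High Quality, both fail -> Low Quality,
    mixed -> Review (manual inspection)."""
    if trace_score >= ts_cut and crl >= crl_cut:
        return "High Quality"
    if trace_score < ts_cut and crl < crl_cut:
        return "Low Quality"
    return "Review"
```

Tuning `ts_cut` and `crl_cut` per primer is what shrinks the Review bucket, the 57% to 28% reduction the abstract reports.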
X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES
X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...
High-efficiency particulate air filter test stand and aerosol generator for particle loading studies
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.
2007-08-01
This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 × 30 × 29 cm³ nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m³/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 °C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m³/min, high mass concentrations (~25 mg/m³) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.
Integration of GPS Precise Point Positioning and MEMS-Based INS Using Unscented Particle Filter
Abd Rabbou, Mahmoud; El-Rabbany, Ahmed
2015-01-01
The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical sensors (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique to accommodate severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446
Vasudevan, V.; Kang, B.S-J.; Johnson, E.K.
2002-09-19
Ceramic barrier filtration is a leading technology employed in hot gas filtration. Hot gases loaded with ash particles flow through the ceramic candle filters and deposit ash on their outer surface. The deposited ash is periodically removed using a back-pulse cleaning jet, a process known as surface regeneration. The cleaning achieved by this technique still leaves some residual ash on the filter surface, which over a period of time sinters, forms a solid cake, and leads to mechanical failure of the candle filter. A room temperature testing facility (RTTF) was built to gain more insight into the surface regeneration process before testing commenced at high temperature. The RTTF was instrumented to obtain pressure histories during the surface regeneration process, and a high-resolution high-speed imaging system was integrated in order to obtain pictures of the surface regeneration process. The objective of this research has been to utilize the RTTF to study the surface regeneration process at the convenience of room temperature conditions. The face velocity of the fluidizing gas, the regeneration pressure of the back pulse, and the time to build up ash on the surface of the candle filter were identified as the important parameters to be studied. Two types of ceramic candle filters were used in the study. Each candle filter was subjected to several cycles of ash build-up followed by a thorough study of the surface regeneration process at different parametric conditions. The pressure histories in the chamber and filter system during build-up and regeneration were then analyzed. The size distribution and movement of the ash particles during the surface regeneration process were studied. The effect of each parameter on the performance of the regeneration process is presented. A comparative study between the two candle filters with different characteristics is presented.
Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah
2015-01-01
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches: the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims. PMID:25978493
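A minimal global-best PSO of the kind these metaheuristic trackers build on can be sketched on a toy objective. This is a generic sketch, not the H-MCPSO of the paper; the inertia and acceleration values are common defaults, not the authors' settings:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal global-best PSO: inertia + cognitive + social velocity terms."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In a tracking setting, `f` would be the image-based fitness function over the 34-dimensional pose vector rather than a toy objective.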
Electronic enclosure design using distributed particle swarm optimization
NASA Astrophysics Data System (ADS)
Scriven, Ian; Lu, Junwei; Lewis, Andrew
2013-02-01
This article proposes a method for designing electromagnetic compatibility shielding enclosures using a peer-to-peer distributed optimization system built on a modified particle swarm optimization algorithm. This optimization system is used to efficiently obtain solutions to a shielding enclosure design problem that are optimal with respect to both electromagnetic shielding efficiency and thermal performance. During the optimization procedure it became evident that optimization algorithms and computational models must be properly matched in order to achieve efficient operation. The proposed system is designed to be tolerant of faults and resource heterogeneity, and as such would find use in environments where large-scale computing resources are not available, such as smaller engineering companies, where it would allow computer-aided design by optimization using existing resources with little to no financial outlay.
Ingle, Atul; Varghese, Tomy
2014-01-01
Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity and enables visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities that are estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained from an incompressibility assumption are within 15 kPa of those obtained from the true stiffness data from mechanical testing. Based on visual inspection of the two filtering algorithms, the particle filtering method produces smoother velocity maps. PMID:25285187
Optimal design of plate-fin heat exchangers by particle swarm optimization
NASA Astrophysics Data System (ADS)
Yousefi, M.; Darus, A. N.
2011-12-01
This study explores the application of Particle Swarm Optimization (PSO) for the optimization of a cross-flow plate-fin heat exchanger. Minimization of the total annual cost is the target of the optimization. Seven design parameters, namely, heat exchanger length at the hot and cold sides, fin height, fin frequency, fin thickness, fin-strip length and number of hot-side layers, are selected as optimization variables. A case study from the literature demonstrates the effectiveness of the proposed algorithm in achieving more accurate results.
Two-stage hybrid optimization of fiber Bragg gratings for design of linear phase filters.
Zheng, Rui Tao; Ngo, Nam Quoc; Le Binh, Nguyen; Tjin, Swee Chuan
2004-12-01
We present a new hybrid optimization method for the synthesis of fiber Bragg gratings (FBGs) with complex characteristics. The hybrid optimization method is a two-tier search that employs a global optimization algorithm [i.e., the tabu search (TS) algorithm] and a local optimization method (i.e., the quasi-Newton method). First, the TS global optimization algorithm is used to find a "promising" FBG structure that has a spectral response as close as possible to the targeted spectral response. Then the quasi-Newton local optimization method is applied to further optimize the FBG structure obtained from the TS algorithm to arrive at the targeted spectral response. A dynamic mechanism for weighting the different requirements of the spectral response is employed to enhance the optimization efficiency. To demonstrate the effectiveness of the method, the synthesis of three linear-phase optical filters based on FBGs with different grating lengths is described. PMID:15603077
Improved Particle Swarm Optimization for Global Optimization of Unimodal and Multimodal Functions
NASA Astrophysics Data System (ADS)
Basu, Mousumi
2015-07-01
Particle swarm optimization (PSO) performs well for small-dimensional and less complicated problems but fails to locate global minima for complex multi-minima functions. This paper proposes an improved particle swarm optimization (IPSO) which introduces Gaussian random variables in the velocity term. This improves search efficiency and guarantees a high probability of obtaining the global optimum without significantly impairing the speed of convergence or the simplicity of the structure of particle swarm optimization. The algorithm is experimentally validated on 17 benchmark functions, and the results demonstrate the good performance of IPSO in solving unimodal and multimodal problems. Its high performance is verified by comparison with two popular PSO variants.
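The Gaussian-velocity idea can be illustrated with a single velocity update. This is only a sketch of the modification described above; the use of |N(0,1)| draws in place of uniform random coefficients is an assumption here, not necessarily the paper's exact formulation:

```python
import random

def ipso_velocity(v, x, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity update with Gaussian random coefficients.
    Assumption: the uniform multipliers of standard PSO are replaced
    by absolute values of standard normal draws."""
    return (w * v
            + c1 * abs(rng.gauss(0.0, 1.0)) * (pbest - x)
            + c2 * abs(rng.gauss(0.0, 1.0)) * (gbest - x))
```

The heavier tail of the Gaussian draws occasionally produces large velocity kicks, which is one plausible mechanism for the improved escape from local minima reported above.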
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering using so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented using recorded ground motions. The results show that optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
Thermal design of an electric motor using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Jandaud, P.-O.; Harmand, S.; Fakes, M.
2012-11-01
In this paper, the flow inside an electric machine called a starter-alternator is studied parametrically with CFD in order to feed a thermal lumped model coupled to an optimization algorithm based on Particle Swarm Optimization (PSO). In the first case, the geometrical parameters are symmetric, allowing us to model only one side of the machine; the optimized thermal results are not conclusive. In the second case, all the parameters are independent and the flow is strongly influenced by the dissymmetry. This time, the optimization results are a clear improvement over the original machine.
NASA Astrophysics Data System (ADS)
Erdogan, Eren; Onur Karslioglu, Mahmut; Durmaz, Murat; Aghakarimi, Armin
2014-05-01
In this study, a particle filter (PF), which is mainly based on the Monte Carlo simulation technique, has been applied to polynomial modeling of the local ionospheric conditions above selected ground-based stations. Less sensitivity to the errors caused by linearization of models and to the effect of unknown or unmodeled components in the system model is one of the advantages of the particle filter compared to the Kalman filter, which is commonly used as a recursive filtering method in VTEC modeling. Moreover, the probability distribution of the system models is not required to be Gaussian. In this work, a third-order polynomial function has been incorporated into the particle filter implementation to represent the local VTEC distribution. Coefficients of the polynomial model representing the ionospheric parameters and the receiver inter-frequency biases are the unknowns forming the state vector, which has been estimated epoch-wise for each ground station. To consider the time-varying characteristics of the regional VTEC distribution, the dynamics of the permanently changing state vector parameters have been modeled using a first-order Gauss-Markov process. In the particle filtering, the multivariate probability distribution of the state vector through time has been approximated by means of randomly selected samples and their associated weights. A known drawback of particle filtering is that an increasing number of state vector parameters results in inefficient filter performance and requires more samples to represent the probability distribution of the state vector. Considering the total number of unknown parameters for all ground stations, estimation of these parameters inserted into a single state vector caused the particle filter to produce inefficient results. To solve this problem, the PF implementation has been carried out separately for each ground station at current time epochs. After estimation of unknown
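A first-order Gauss-Markov propagation step of the kind used above for the time-varying state parameters can be sketched as follows. The parameter names (correlation time tau, stationary standard deviation sigma) are illustrative, not taken from the paper:

```python
import math
import random

def gauss_markov_step(x, dt, tau, sigma, rng):
    """Propagate a first-order Gauss-Markov process one step:
    x_{k+1} = a * x_k + w, with a = exp(-dt/tau) and
    w ~ N(0, sigma^2 * (1 - a^2)), which keeps the stationary
    variance equal to sigma^2."""
    a = math.exp(-dt / tau)
    return a * x + rng.gauss(0.0, sigma * math.sqrt(1.0 - a * a))
```

Driving each state vector coefficient with such a step gives the filter a dynamics model that forgets old values on the timescale tau while remaining statistically stationary.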
Ultrafine particle emission from incinerators: the role of the fabric filter.
Buonanno, G; Scungio, M; Stabile, L; Tirler, W
2012-01-01
Incinerators are claimed to be responsible for particle and gaseous emissions: to this purpose, Best Available Techniques (BAT) are used in the flue-gas treatment sections, leading to pollutant emissions lower than the established threshold limit values. As regards particle emission, only a mass-based threshold limit is required by the regulatory authorities. However, in recent years the attention of medical experts has moved from coarse and fine particles towards ultrafine particles (UFPs; diameter less than 0.1 microm), mainly emitted by combustion processes. According to toxicological and epidemiological studies, ultrafine particles could represent a risk for health and the environment. Therefore, it is necessary to quantify particle emissions from incinerators, also to perform an exposure assessment for the human populations living in their surrounding areas. A further topic to be stressed in UFP emission from incinerators is the particle filtration efficiency as a function of the different flue-gas treatment sections. In fact, it is important to know which particle filtration method is able to assure high abatement efficiency also in terms of UFPs. To this purpose, in the present work experimental results in terms of ultrafine particle emissions from several incineration plants are reported. Experimental campaigns were carried out in the period 2007-2010 by measuring UFP number distributions and total concentrations at the stack of five plants through condensation particle counters and mobility particle sizer spectrometers. Average total particle number concentrations ranging from 0.4 × 10^3 to 6.0 × 10^3 particles cm^-3 were measured at the stack of the analyzed plants. Further experimental campaigns were performed to characterize particle levels before the fabric filters in two of the analyzed plants in order to deepen their particle reduction effect; particle concentrations higher than 1 × 10^7 particles cm^-3 were measured, leading to filtration
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
Optimal filter parameters for low SNR seismograms as a function of station and event location
NASA Astrophysics Data System (ADS)
Leach, Richard R.; Dowla, Farid U.; Schultz, Craig A.
1999-06-01
Global seismic monitoring requires deployment of seismic sensors worldwide, in many areas that have not been studied or have few usable recordings. Using events with lower signal-to-noise ratios (SNR) would increase the amount of data from these regions. Lower SNR events can add significant numbers to data sets, but recordings of these events must be carefully filtered. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. To reduce this laborious process, we have developed an automated method to provide optimal filters for low SNR regional or teleseismic events. As seismic signals are often localized in frequency and time with distinct time-frequency characteristics, our method is based on the decomposition of a time series into a set of subsignals, each representing a band with f/Δf constant (constant Q). The SNR is calculated on the pre-event noise and signal windows. The band-pass signals with high SNR are used to indicate the cutoff limits for the optimized filter. Results indicate a significant improvement in SNR, particularly for low SNR events. The method provides an optimum filter which can be immediately applied to unknown regions. The filtered signals are used to map the seismic frequency response of a region and may provide improvements in travel-time picking, azimuth estimation, regional characterization, and event detection. For example, when an event is detected and a preliminary location is determined, the computer could automatically select optimal filter bands for data from non-reporting stations. Results are shown for a set of low SNR events as well as 379 regional and teleseismic events recorded at stations ABKT, KIV, and ANTO in the Middle East.
Particle Density Using Deposition Filters at the Full Scale RDD Experiments.
Berg, Rodney; Gilhuly, Colleen; Korpach, Ed; Ungar, Kurt
2016-05-01
During the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials carried out in Suffield, Alberta, Canada, several suites of detection equipment and software models were used to measure and characterize the ground deposition. The FSRDD Field Trials were designed to disperse radioactive lanthanum of known activity to better understand such an event. This paper focuses on one means of measuring both concentration and the particle size distribution of the deposition using electrostatic filters placed around the trial site to collect deposited particles for analysis. The measurements made from ground deposition filters provided a basis to guide modeling and validate results by giving insight on how particles are distributed by a plume. PMID:27023034
Gaussian mixture sigma-point particle filter for optical indoor navigation system
NASA Astrophysics Data System (ADS)
Zhang, Weizhi; Gu, Wenjun; Chen, Chunyi; Chowdhury, M. I. S.; Kavehrad, Mohsen
2013-12-01
With the fast growth and popularization of smart computing devices, there is a rising demand for accurate and reliable indoor positioning. Recently, systems using visible light communications (VLC) technology have been considered as candidates for indoor positioning applications. A number of researchers have reported that VLC-based positioning systems can achieve position estimation accuracy on the order of centimeters. This paper proposes an indoor navigation environment based on VLC technology. Light-emitting diodes (LEDs), which are essentially semiconductor devices, can be easily modulated and used as transmitters within the proposed system. Positioning is realized by collecting received-signal-strength (RSS) information on the receiver side, after which least-squares estimation is performed to obtain the receiver position. To enable tracking of the user's trajectory and to reduce the effect of outliers in the raw measurements, different filters are employed. In this paper, we show by computer simulations that the Gaussian mixture sigma-point particle filter (GM-SPPF) outperforms other filters, such as the basic Kalman filter and the sequential importance-resampling particle filter (SIR-PF), at a reasonable computational cost.
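The least-squares position fix from RSS-derived ranges described above can be sketched as a generic linearized trilateration. This is not the authors' code; the 2-D setup and anchor layout are illustrative:

```python
def rss_position(anchors, dists):
    """2-D least-squares position fix from range estimates to known
    anchor positions (>= 3 anchors, e.g. LED transmitters). Subtracting
    the first range equation from the others linearizes the problem;
    the normal equations are then solved directly."""
    x1, y1 = anchors[0]
    d1 = dists[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        ax = 2.0 * (xi - x1)
        ay = 2.0 * (yi - y1)
        rhs = d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * rhs; b2 += ay * rhs
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noisy ranges the raw fixes jitter, which is where the tracking filters compared in the paper (Kalman, SIR-PF, GM-SPPF) come in.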
Alderman, Steven L; Parsons, Michael S; Hogancamp, Kristina U; Waggoner, Charles A
2008-11-01
High-efficiency particulate air (HEPA) filters are widely used to control particulate matter emissions from processes that involve management or treatment of radioactive materials. Section FC of the American Society of Mechanical Engineers AG-1 Code on Nuclear Air and Gas Treatment currently restricts media velocity to a maximum of 2.5 cm/sec in any application where this standard is invoked. There is some desire to eliminate or increase this media velocity limit. A concern is that increasing media velocity will result in higher emissions of ultrafine particles; thus, it is unlikely that higher media velocities will be allowed without data to demonstrate the effect of media velocity on removal of ultrafine particles. In this study, the performance of nuclear grade HEPA filters, with respect to filter efficiency and most penetrating particle size, was evaluated as a function of media velocity. Deep-pleat nuclear grade HEPA filters (31 cm x 31 cm x 29 cm) were evaluated at media velocities ranging from 2.0 to 4.5 cm/sec using a potassium chloride aerosol challenge having a particle size distribution centered near the HEPA filter most penetrating particle size. Filters were challenged under two distinct mass loading rate regimes through the use of or exclusion of a 3 microm aerodynamic diameter cut point cyclone. Filter efficiency and most penetrating particle size measurements were made throughout the duration of filter testing. Filter efficiency measured at the onset of aerosol challenge was noted to decrease with increasing media velocity, with values ranging from 99.999 to 99.977%. The filter most penetrating particle size recorded at the onset of testing was noted to decrease slightly as media velocity was increased and was typically in the range of 110-130 nm. Although additional testing is needed, these findings indicate that filters operating at media velocities up to 4.5 cm/sec will meet or exceed current filter efficiency requirements. Additionally
Richardson, R B; Hegyi, G; Starling, S C
2003-01-01
Methods have been developed to assess the size distribution of alpha emitting particles of reactor fuel of known composition captured on air sampler filters. The sizes of uranium oxide and plutonium oxide particles were determined using a system based on CR-39 solid-state nuclear track detectors. The CR-39 plastic was exposed to the deposited particles across a 400 microm airgap. The exposed CR-39 was chemically etched to reveal clusters of tracks radially dispersed from central points. The number and location of the tracks were determined using an optical microscope with an XY motorised table and image analysis software. The sample mounting arrangement allowed individual particles to be simultaneously viewed with their respective track cluster. The predicted diameters correlated with the actual particle diameters, as measured using the optical microscope. The efficacy of the technique was demonstrated with particles of natural uranium oxide (natUO2) of known size, ranging from 4 to 150 microm in diameter. Two personal air sampler (PAS) filters contaminated with actinide particles were placed against CR-39 and estimated to have size distributions of 0.8 and 1.0 microm activity median aerodynamic diameter (AMAD). PMID:14526944
Microscopy with spatial filtering for sorting particles and monitoring subcellular morphology
NASA Astrophysics Data System (ADS)
Zheng, Jing-Yi; Qian, Zhen; Pasternack, Robert M.; Boustany, Nada N.
2009-02-01
Optical scatter imaging (OSI) was developed to non-invasively track real-time changes in particle morphology with submicron sensitivity in situ, without exogenous labeling, cell fixing, or organelle isolation. For spherical particles, the intensity ratio of wide-to-narrow angle scatter (OSIR, Optical Scatter Image Ratio) was shown to decrease monotonically with diameter and agree with Mie theory. In living cells, we recently reported that this technique is able to detect mitochondrial morphological alterations, which were mediated by the Bcl-xL transmembrane domain and could not be observed in fluorescence or differential interference contrast images. Here we further extend the ability of morphology assessment by adopting a digital micromirror device (DMD) for Fourier filtering. When placed in the Fourier plane, the DMD can be used to select scattering intensities at any desired combination of scattering angles. We designed an optical filter bank consisting of Gabor-like filters at various scales and rotations; Gabor filters have been widely used for localization of spatial and frequency information in digital images and for texture analysis. Using a model system consisting of mixtures of polystyrene spheres and bacteria, we show how this system can be used to sort particles on a microscope slide based on their size, orientation and aspect ratio. We are currently applying this technique to characterize the morphology of subcellular organelles to help understand fundamental biological processes.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to considering only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
NASA Astrophysics Data System (ADS)
Wu, Jingjing; Hu, Shiqiang; Wang, Yang
2011-09-01
Particle probability hypothesis density (PHD) filter-based visual trackers have achieved considerable success in the visual tracking field. However, position measurements based on detection may not be discriminative enough to distinguish an object from clutter, and accurate state extraction cannot be obtained in the original PHD filtering framework, especially when targets can appear, disappear, merge, or split at any time. To address these limitations, the proposed algorithm combines the color histogram of a target and the temporal dynamics in a unifying framework, and a Gaussian mixture model clustering method is designed for efficient state extraction. The proposed tracker can improve the accuracy of state estimation when tracking a variable number of objects.
Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter
Zhou, Ning; Meng, Da; Lu, Shuai
2013-11-11
In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF’s performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
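A basic (bootstrap) particle filter cycle of the kind the extended PF builds on can be sketched on a toy scalar model. This is a generic illustration under assumed random-walk dynamics and Gaussian noise, not the authors' synchronous-machine implementation:

```python
import math
import random

def particle_filter_step(particles, z, q, r, rng):
    """One predict/update/resample cycle of a bootstrap PF for the
    toy model x_k = x_{k-1} + w, z_k = x_k + v, w~N(0,q), v~N(0,r)."""
    # Predict: propagate each particle through the process model.
    particles = [x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * (z - x) ** 2 / r) for x in particles]
    total = sum(weights) or 1.0  # guard against total underflow
    weights = [w / total for w in weights]
    # Estimate: posterior mean, then multinomial resampling.
    estimate = sum(w * x for w, x in zip(weights, particles))
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, estimate
```

Repeating this cycle with PMU-like measurements concentrates the particle cloud around the observed state; the "extended" modifications of the paper are aimed at making this basic loop robust to model and measurement noise.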
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46 ≤ α ≤ 150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357
Applying a fully nonlinear particle filter on a coupled ocean-atmosphere climate model
NASA Astrophysics Data System (ADS)
Browne, Philip; van Leeuwen, Peter Jan; Wilson, Simon
2014-05-01
It is a widely held assumption that particle filters are not applicable in high-dimensional systems due to filter degeneracy, commonly called the curse of dimensionality. This is only true of naive particle filters, and indeed it has been shown that much more advanced methods perform particularly well on systems of dimension up to 2^16 ≈ 6.5 × 10^4. In this talk we will present results from using the equivalent-weights particle filter in twin experiments with the global climate model HadCM3. These experiments have a number of notable features. Firstly, the sheer size of the model in use is substantially larger than has previously been achieved. The model has state dimension approximately 4 × 10^6 and approximately 4 × 10^4 observations per analysis step. This is two orders of magnitude more than has been achieved with a particle filter in the geosciences. Secondly, the use of a fully nonlinear data assimilation technique to initialise a climate model gives us the possibility of finding non-Gaussian estimates for the current state of the climate. In doing so we may find that the same model demonstrates multiple likely scenarios for forecasts on a multi-annual/decadal timescale. The experiments assimilate artificial sea surface temperatures daily for several years. We will discuss how an ensemble-based method for assimilation in a coupled system avoids issues faced by variational methods. Practical details of how the experiments were carried out, specifically the use of the EMPIRE data assimilation framework, will be discussed. The results from applying the nonlinear data assimilation method can always be improved through a better representation of the model error covariance matrix. We will discuss the representation we have used for this matrix and, in particular, how it was generated from the coupled system.
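The degeneracy discussed in this abstract and in the Snyder reference can be reproduced in a few lines: with i.i.d. state components and a naive prior proposal, the importance weights collapse onto a handful of particles as the dimension grows. This is a schematic experiment, not the HadCM3 setup; the Gaussian prior/likelihood and the particle count are illustrative assumptions.

```python
import numpy as np

def weight_stats(dim, n_particles=1000, seed=1):
    """Draw particles from a standard-normal prior, weight them against an
    observation y = 0 of every component with unit noise, and return the
    largest normalized importance weight and the effective sample size."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_particles, dim))
    log_w = -0.5 * np.sum(x ** 2, axis=1)   # log-likelihood of y = 0 per particle
    w = np.exp(log_w - log_w.max())         # stabilize before exponentiating
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)              # effective sample size
    return w.max(), ess

for d in (1, 10, 100):
    print(d, weight_stats(d))
```

As the dimension increases from 1 to 100, the maximum weight grows toward unity and the effective sample size collapses, which is exactly the degeneracy that forces the ensemble size to grow exponentially with (effective) dimension.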
Continuous collection of soluble atmospheric particles with a wetted hydrophilic filter.
Takeuchi, Masaki; Ullah, S M Rahmat; Dasgupta, Purnendu K; Collins, Donald R; Williams, Allen
2005-12-15
Approximately one-third of the area (a 14-mm diameter portion of a 25-mm diameter filter) of a 5-µm uniform pore size polycarbonate filter is continuously wetted by a 0.25 mL/min water mist. The water forms a continuous thin film on the filter and percolates through it. The flowing water substantially reduces the effective pore size of the filter. At the operational air sampling flow rate of 1.5 standard liters per minute, such a particle collector (PC) efficiently captures particles down to very small sizes. As determined by fluorescein-tagged NaCl aerosol generated by a vibrating orifice aerosol generator, the capture efficiency was 97.7+% for particle aerodynamic diameters ranging from 0.28 to 3.88 µm. Further, 55.3 and 80.3% of 25- and 100-nm (NH4)2SO4 particles generated by size classification with a differential mobility analyzer were respectively collected by the device. The PC is integrally coupled with a liquid collection reservoir. The liquid effluent from the wetted filter collector, bearing the soluble components of the aerosol, can be continuously collected or periodically withdrawn. The latter strategy permits the use of a robust syringe pump for the purpose. Coupled with a PM2.5 cyclone inlet and a membrane-based parallel plate denuder at the front end and an ion chromatograph at the back end, the PC readily operated for at least 4-week periods without filter replacement or any other maintenance. PMID:16351153
Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin
2012-06-01
This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing, and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.
Preparation and optimization of the laser thin film filter
NASA Astrophysics Data System (ADS)
Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao
2014-08-01
A co-colored thin film device for a laser-induced damage threshold test system is presented in this paper, to make the laser-induced damage threshold tester operate at the 532 nm and 1064 nm bands. Using TFC simulation software, a film system with high reflectance, high transmittance, and resistance to laser damage is designed and optimized. The film is deposited by a thermal evaporation technique; the optical properties of the coating and its laser-induced damage performance are then tested, and the reflectance, transmittance, and damage threshold are measured. The results show that the measured parameters (reflectance R ≥ 98% @ 532 nm, transmittance T ≥ 98% @ 1064 nm, laser-induced damage threshold LIDT ≥ 4.5 J/cm²) meet the design requirements, which lays the foundation for achieving a multifunctional laser-induced damage threshold tester.
Genetic algorithm and particle swarm optimization combined with Powell method
NASA Astrophysics Data System (ADS)
Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui
2013-10-01
In recent years, population-based algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution through a population that progresses over several generations. This paper presents variants of two hybrid algorithms: a hybrid genetic algorithm and a bio-inspired hybrid particle swarm optimization, both combined with a local method, the Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.
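The hybrid idea above can be sketched with a standard global-best PSO whose best point would then be handed to Powell's method for local refinement. This is a sketch under assumptions: the test function (the sphere), swarm parameters, and iteration budget are illustrative, and the Powell polish step is only indicated in the comment rather than implemented.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Classic unconstrained test function, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso(f, dim=2, n=30, iters=200, w=0.729, c1=1.494, c2=1.494):
    """Plain global-best PSO with Clerc's constriction-style parameters.
    In the hybrid of the paper, the returned best point would be passed
    to Powell's derivative-free local method as a starting point."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(xi) for xi in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, fbest = pso(sphere)
print(best, fbest)
```

On a smooth unimodal function the swarm alone already converges well; the point of the hybrid is that a local method such as Powell's finishes the convergence much faster once the swarm has located the right basin.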
NASA Astrophysics Data System (ADS)
Ding, Ze-Min; Chen, Lin-Gen; Ge, Yan-Lin; Sun, Feng-Rui
2016-04-01
A theoretical model for energy selective electron (ESE) heat pumps operating with two-dimensional electron reservoirs is established in this study. In this model, a double-resonance energy filter operating with a total momentum filtering mechanism is considered for the transmission of electrons. The optimal thermodynamic performance of the ESE heat pump devices is also investigated. Numerical calculations show that the heating load of the device with two resonances is larger, whereas its coefficient of performance (COP) is lower, than those of an ESE heat pump with a single-resonance filter. The performance characteristics of the ESE heat pumps under the total momentum filtering condition are generally superior to those with a conventional filtering mechanism. In particular, the performance characteristics of ESE heat pumps with a conventional filtering mechanism differ greatly from those of a device with total momentum filtering, a difference induced by extra electron momentum components beyond the horizontal direction. Parameters such as resonance width and energy spacing are found to be associated with the performance of the electron system.
An optimal target-filter system for electron beam generated x-ray spectra
Hsu, Hsiao-Hua; Vasilik, D.G.; Chen, J.
1994-04-01
An electron beam generated x-ray spectrum consists of characteristic x rays of the target and continuous bremsstrahlung. The percentage of characteristic x rays over the entire energy spectrum depends on the beam energy and the filter thickness. To determine the optimal electron beam energy and filter thickness, one can either conduct many experimental measurements, or perform a series of Monte Carlo simulations. Monte Carlo simulations are shown to be an efficient tool for determining the optimal target-filter system for electron beam generated x-ray spectra. Three of the most commonly used low-energy x-ray metal targets (Cu, Zn and Mo) are chosen for this study to illustrate the power of Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Ye, Hong-Ling; Wang, Wei-Wei; Chen, Ning; Sui, Yun-Kang
2016-08-01
In this paper, a model of topology optimization with linear buckling constraints is established based on an independent and continuous mapping method to minimize the plate/shell structure weight. A composite exponential function (CEF) is selected as the filtering function for the element weight, the element stiffness matrix, and the element geometric stiffness matrix; it identifies the design variables and implements the change of the design variables from "discrete" to "continuous" and back to "discrete". The buckling constraints are approximated as explicit formulations based on the Taylor expansion and the filtering function. The optimization model is transformed into a dual program and solved by the dual sequence quadratic programming algorithm. Finally, three numerical examples with the power function and the CEF as filter functions are analyzed and discussed to demonstrate the feasibility and efficiency of the proposed method.
A Neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification
Yuksel, Ayhan; Olmez, Tamer
2015-01-01
In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization. We simultaneously train the spatial filter and the classifier using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers: a spatial filtering layer and a classifier layer. These two layers are linked to each other with non-linear mapping functions. The proposed method addresses two shortcomings of the common spatial patterns (CSP) algorithm. First, CSP aims to maximize the between-classes variance while ignoring the minimization of within-classes variances. Consequently, the features obtained using the CSP method may have large within-classes variances. Second, the maximizing optimization function of CSP increases the classification accuracy indirectly because an independent classifier is used after the CSP method. With SFN, we aimed to maximize the between-classes variance while minimizing within-classes variances and simultaneously optimizing the spatial filter and the classifier. To classify motor imagery EEG signals, we modified the well-known feed-forward structure and derived forward and backward equations that correspond to the proposed structure. We tested our algorithm on simple toy data. Then, we compared the SFN with conventional CSP and its multi-class version, called one-versus-rest CSP, on two data sets from BCI competition III. The evaluation results demonstrate that SFN is a good alternative for classifying motor imagery EEG signals with increased classification accuracy. PMID:25933101
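The two-layer structure described above can be sketched as a single forward pass. This is not the authors' exact network: the layer sizes are arbitrary, and the log-variance link between the spatial filtering layer and the classifier layer is an assumed nonlinear mapping chosen because log-variance is the conventional feature in CSP-style motor imagery pipelines.

```python
import numpy as np

rng = np.random.default_rng(3)

def sfn_forward(trial, W, V, b):
    """Forward pass of a two-layer spatial-filter-network sketch:
    1. spatially filter the EEG channels (spatial filtering layer, W),
    2. map each filtered signal to its log-variance (nonlinear link),
    3. apply a linear classifier layer (V, b) with a softmax output.
    In the paper both W and (V, b) are trained jointly by backpropagation."""
    s = W @ trial                        # (n_filters, n_samples) filtered signals
    feat = np.log(np.var(s, axis=1))     # nonlinear mapping to features
    z = V @ feat + b                     # classifier layer
    e = np.exp(z - z.max())
    return e / e.sum()                   # class probabilities

n_channels, n_samples, n_filters, n_classes = 8, 250, 4, 2
trial = rng.normal(size=(n_channels, n_samples))  # one synthetic EEG trial
W = rng.normal(size=(n_filters, n_channels))      # untrained spatial filters
V = rng.normal(size=(n_classes, n_filters))       # untrained classifier weights
b = np.zeros(n_classes)
p = sfn_forward(trial, W, V, b)
print(p)
```

Training W and (V, b) against a classification loss simultaneously is what lets the network minimize within-class variance while maximizing between-class variance, addressing the two CSP shortcomings the abstract lists.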
Large particle penetration through N95 respirator filters and facepiece leaks with cyclic flow.
Cho, Kyungmin Jacob; Reponen, Tiina; McKay, Roy; Shukla, Rakesh; Haruta, Hiroki; Sekar, Padmini; Grinshpun, Sergey A
2010-01-01
The aim of this study was to investigate respirator filter and faceseal penetration of particles representing bacterial and fungal spore size ranges (0.7-4 µm). First, field experiments were conducted to determine workplace protection factors (WPFs) for a typical N95 filtering facepiece respirator (FFR). These data (average WPF = 515) were then used to position the FFR on a manikin to simulate realistic donning conditions for laboratory experiments. Filter penetration was also measured after the FFR was fully sealed on the manikin face. This value was deducted from the total penetration (obtained from tests with the partially sealed FFR) to determine the faceseal penetration. All manikin experiments were repeated using three sinusoidal breathing flow patterns corresponding to mean inspiratory flow rates of 15, 30, and 85 L min⁻¹. The faceseal penetration varied from 0.1 to 1.1% and decreased with increasing particle size (P < 0.001) and breathing rate (P < 0.001). The fractions of aerosols penetrating through the faceseal leakage varied from 0.66 to 0.94. In conclusion, even for a well-fitting FFR, most particle penetration occurs through faceseal leakage, which varies with breathing flow rate and particle size. PMID:19700488
Sun, Jun; Fang, Wei; Wu, Xiaojun; Palade, Vasile; Xu, Wenbo
2012-01-01
Quantum-behaved particle swarm optimization (QPSO), motivated by concepts from quantum mechanics and particle swarm optimization (PSO), is a probabilistic optimization algorithm belonging to the bare-bones PSO family. Although it has been shown to perform well in finding the optimal solutions for many optimization problems, there has so far been little analysis on how it works in detail. This paper presents a comprehensive analysis of the QPSO algorithm. In the theoretical analysis, we analyze the behavior of a single particle in QPSO in terms of probability measure. Since the particle's behavior is influenced by the contraction-expansion (CE) coefficient, which is the most important parameter of the algorithm, the goal of the theoretical analysis is to find out the upper bound of the CE coefficient, within which the value of the CE coefficient selected can guarantee the convergence or boundedness of the particle's position. In the experimental analysis, the theoretical results are first validated by stochastic simulations for the particle's behavior. Then, based on the derived upper bound of the CE coefficient, we perform empirical studies on a suite of well-known benchmark functions to show how to control and select the value of the CE coefficient, in order to obtain generally good algorithmic performance in real world applications. Finally, a further performance comparison between QPSO and other variants of PSO on the benchmarks is made to show the efficiency of the QPSO algorithm with the proposed parameter control and selection methods. PMID:21905841
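The role of the contraction-expansion (CE) coefficient analyzed above can be illustrated with the standard QPSO position update, in which each particle is drawn around a local attractor with a spread proportional to the CE coefficient β. The benchmark (the sphere function), swarm size, and the linear β schedule below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(4)

def qpso_step(x, pbest, gbest, beta):
    """Standard QPSO update: each particle moves to
    p ± beta * |mbest - x| * ln(1/u), where p is a random convex
    combination of its personal best and the global best, mbest is the
    mean of all personal bests, and beta is the CE coefficient."""
    n, dim = x.shape
    phi = rng.random((n, dim))
    p = phi * pbest + (1 - phi) * gbest      # local attractors
    mbest = pbest.mean(axis=0)               # mean-best position
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

def sphere(x):
    return np.sum(x ** 2, axis=1)

x = rng.uniform(-5, 5, (40, 2))
pbest, pval = x.copy(), sphere(x)
for t in range(300):
    beta = 1.0 - 0.5 * t / 300               # common linear decrease 1.0 -> 0.5
    gbest = pbest[pval.argmin()]
    x = qpso_step(x, pbest, gbest, beta)
    val = sphere(x)
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
print(pval.min())
```

Values of β kept below the upper bound derived in the paper (about 1.781 for convergence of the particle's position) make runs like this converge; schedules decreasing from 1.0 to 0.5 are a widely used practical choice.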
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models.
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
Particle filtering for sensor-to-sensor self-calibration and motion estimation
NASA Astrophysics Data System (ADS)
Yang, Yafei; Li, Jianguo
2013-01-01
This paper addresses the problem of calibrating the six degrees-of-freedom rigid body transform between a camera and an inertial measurement unit (IMU) while at the same time estimating the 3D motion of a vehicle. High-fidelity measurement models for the camera and IMU are derived, and the estimation algorithm is implemented within the particle filter (PF) framework. Belonging to the class of sequential Monte Carlo methods, the filter uses the unscented Kalman filter (UKF) to generate the importance proposal distribution. This not only avoids the limitation of the UKF, which applies only to Gaussian distributions, but also the limitation of the standard PF, whose proposal cannot incorporate the latest measurements. Moreover, the proposed algorithm requires no additional hardware equipment. Simulation results illustrate the ill effects of misalignment on motion estimation and demonstrate accurate estimation of both the calibration parameters and the state of the vehicle.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system. PMID:18179066
Real-Time Flood Forecasting System Using Channel Flow Routing Model with Updating by Particle Filter
NASA Astrophysics Data System (ADS)
Kudo, R.; Chikamori, H.; Nagai, A.
2008-12-01
A real-time flood forecasting system using a channel flow routing model was developed for runoff forecasting at water-gauged and ungauged points along river channels. The system is based on a flood runoff model composed of upstream part models, tributary part models and downstream part models. The upstream part models and tributary part models are lumped rainfall-runoff models, and the downstream part models consist of a lumped rainfall-runoff model for hillslopes adjacent to a river channel and a kinematic flow routing model for a river channel. The flow forecast of this model is updated by particle filtering of the downstream part model as well as by extended Kalman filtering of the upstream part model and the tributary part models. Particle filtering is a simple and powerful updating algorithm for non-linear and non-Gaussian systems, so it can easily be applied to the downstream part model without complicated linearization. The presented flood runoff model has an advantage over grid-based distributed models in the simplicity of its updating procedure, owing to its smaller number of state variables. This system was applied to the Gono-kawa River Basin in Japan, and the flood forecasting accuracy of the system with both particle filtering and extended Kalman filtering was compared with that of the system with only extended Kalman filtering. In this study, water gauging stations in the objective basin were divided into two types: reference stations and verification stations. Reference stations were regarded as ordinary water gauging stations, and observed data at these stations were used for calibration and updating of the model. Verification stations were considered as ungauged or arbitrary points, and observed data at these stations were used neither for calibration nor updating, but only for evaluation of forecasting accuracy. The result confirms that particle filtering of the downstream part model improves forecasting accuracy of runoff at
Optimization of Particle-in-Cell Codes on RISC Processors
NASA Technical Reports Server (NTRS)
Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.
1996-01-01
General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used in massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
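The data-reorganization idea can be illustrated by the contrast between an interleaved array-of-structs particle layout and a struct-of-arrays layout with each field contiguous. The paper's context is Fortran on RISC hardware; the NumPy sketch below is only an analogy for the memory-layout point, with field names and sizes chosen arbitrarily.

```python
import numpy as np

N = 100_000

# Array-of-structs layout: particle fields interleaved in memory, so a
# loop touching only positions still drags velocity bytes through cache.
aos = np.zeros(N, dtype=[("x", "f8"), ("y", "f8"), ("vx", "f8"), ("vy", "f8")])

# Struct-of-arrays layout: each field is its own contiguous array, so a
# position update streams through memory and uses every cached byte.
x, y, vx, vy = (np.zeros(N) for _ in range(4))

def push_soa(x, y, vx, vy, dt):
    """Position push over contiguous per-field arrays (vectorizable)."""
    x += vx * dt
    y += vy * dt

push_soa(x, y, vx, vy, 0.1)
# The stride between consecutive "x" values: 32 bytes in the interleaved
# layout (4 x f8 per record) versus 8 bytes in the contiguous layout.
print(aos["x"].strides, x.strides)
```

The 32-byte versus 8-byte stride is exactly the difference that determines how much of each cache line a field-wise inner loop actually uses.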
NASA Astrophysics Data System (ADS)
Wang, Xuemei; Ni, Wenbo
2016-09-01
For loosely coupled INS/GPS integrated navigation systems with low-cost, low-accuracy microelectromechanical inertial sensors, a full-state nonlinear dynamic model, rather than a linearized error model, is much preferable for obtaining sufficient accuracy. Particle filters are particularly suited to nonlinear and non-Gaussian situations, but typical bootstrap particle filters (BPFs) and some improved particle filters (IPFs), such as auxiliary particle filters (APFs) and Gaussian particle filters (GPFs), cannot resolve the mismatch between the importance function and the likelihood function very well. The predicted particles propagated through the inertial navigation equations cannot be scattered with certainty within the effective range of the current observation when there are large drift errors in the inertial sensors. Therefore, the current observation cannot play its corrective role well, and these particle filters become invalid to some extent. The proposed IPF first estimates the corresponding state bias errors according to the current observation and then corrects the bias errors of the predicted particles before determining the weights and resampling the particles. Simulations and practical experiments both show that the proposed IPF can effectively resolve the mismatch between the importance function and the likelihood function of a BPF and compensate the accumulated errors of INSs very well. It is highly robust in seriously noisy scenarios.
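The mismatch described here, and the effect of correcting the predicted particles before weighting, can be shown in one dimension. The paper's correction is model-based; the mean-shift below is a deliberately crude illustrative stand-in, and all numbers (drift magnitude, noise levels, particle count) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Predicted particles whose cloud has drifted far from the observation,
# as happens when inertial sensor drift errors accumulate.
particles = rng.normal(10.0, 1.0, 5000)   # prediction centered at 10
y, r = 0.0, 1.0                           # observation and its noise std

# Mismatch: essentially no particle lies under the likelihood, so the
# raw weights all but vanish and the observation cannot correct anything.
w_raw = np.exp(-0.5 * ((y - particles) / r) ** 2)
print(w_raw.max())

# Crude bias correction in the spirit of the proposed IPF: estimate the
# bias from the current observation, shift the predicted particles, and
# only then compute weights and resample.
bias = particles.mean() - y
corrected = particles - bias
w = np.exp(-0.5 * ((y - corrected) / r) ** 2)
w /= w.sum()
ess = 1.0 / np.sum(w ** 2)                # effective sample size recovers
print(ess)
```

Before the shift the largest unnormalized weight is numerically negligible; after it, the effective sample size is a large fraction of the particle count, so the observation can again play its corrective role.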
Jaeschke, B C; Lind, O C; Bradshaw, C; Salbu, B
2015-01-01
Radioactive particles are aggregates of radioactive atoms that may contain significant activity concentrations. They have been released into the environment from nuclear weapons tests, and from accidents and effluents associated with the nuclear fuel cycle. Aquatic filter-feeders can capture and potentially retain radioactive particles, which could then provide concentrated doses to nearby tissues. This study experimentally investigated the retention and effects of radioactive particles in the blue mussel, Mytilus edulis. Spent fuel particles originating from the Dounreay nuclear establishment, and collected in the field, comprised a U and Al alloy containing fission products such as (137)Cs and (90)Sr/(90)Y. Particles were introduced into mussels in suspension with plankton-food or through implantation in the extrapallial cavity. Of the particles introduced with food, 37% were retained for 70 h, and were found on the siphon or gills, with the notable exception of one particle that was ingested and found in the stomach. Particles not retained seemed to have been actively rejected and expelled by the mussels. The largest and most radioactive particle (estimated dose rate 3.18 ± 0.06 Gyh(-1)) induced a significant increase in Comet tail-DNA %. In one case this particle caused a large white mark (suggesting necrosis) in the mantle tissue with a simultaneous increase in micronucleus frequency observed in the haemolymph collected from the muscle, implying that non-targeted effects of radiation were induced by radiation from the retained particle. White marks found in the tissue were attributed to ionising radiation and physical irritation. The results indicate that current methods used for risk assessment, based upon the absorbed dose equivalent limit and estimating the "no-effect dose" are inadequate for radioactive particle exposures. Knowledge is lacking about the ecological implications of radioactive particles released into the environment, for example potential
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the extended Kalman filter and splitting Gaussian mixture algorithms to demonstrate its proficiency.
NASA Astrophysics Data System (ADS)
Aggarwal, Priyanka; Syed, Zainab; El-Sheimy, Naser
2009-05-01
Navigation includes the integration of methodologies and systems for estimating time-varying position, velocity and attitude of moving objects. Navigation incorporating the integrated inertial navigation system (INS) and global positioning system (GPS) generally requires extensive evaluations of nonlinear equations involving double integration. Currently, integrated navigation systems are commonly implemented using the extended Kalman filter (EKF). The EKF assumes a linearized process, measurement models and Gaussian noise distributions. These assumptions are unrealistic for highly nonlinear systems like land vehicle navigation and may cause filter divergence. A particle filter (PF) is developed to enhance integrated INS/GPS system performance as it can easily deal with nonlinearity and non-Gaussian noises. In this paper, a hybrid extended particle filter (HEPF) is developed as an alternative to the well-known EKF to achieve better navigation data accuracy for low-cost microelectromechanical system sensors. The results show that the HEPF performs better than the EKF during GPS outages, especially when simulated outages are located in periods with high vehicle dynamics.
Particle Filters for Real-Time Fault Detection in Planetary Rovers
NASA Technical Reports Server (NTRS)
Dearden, Richard; Clancy, Dan; Koga, Dennis (Technical Monitor)
2001-01-01
Planetary rovers provide a considerable challenge for robotic systems in that they must operate for long periods autonomously, or with relatively little intervention. To achieve this, they need on-board fault detection and diagnosis capabilities in order to determine the actual state of the vehicle and decide what actions are safe to perform. Traditional model-based diagnosis techniques are not suitable for rovers due to the tight coupling between the vehicle's performance and its environment. Hybrid diagnosis using particle filters is presented as an alternative, and its strengths and weaknesses are examined. We also present some extensions to particle filters that are designed to make them more suitable for use in diagnosis problems.
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.
Wang, Bo; Xiao, Xuan; Xia, Yuanqing; Fu, Mengyin
2013-01-01
A ship is not an absolutely rigid body. Many factors can cause deformations that lead to large errors in mounted devices, especially navigation systems. Such errors should be estimated and compensated effectively, or they will severely reduce the navigation accuracy of the ship. In order to estimate the deformation, an unscented particle filter method for estimation of shipboard deformation based on an inertial measurement unit is presented. In this method, a nonlinear shipboard deformation model is built. Simulations demonstrated the accuracy reduction due to deformation. Then an attitude plus angular rate match mode is proposed as a frame to estimate the shipboard deformation using inertial measurement units. In this frame, given the nonlinearity of the system model, an unscented particle filter method is proposed to estimate and compensate the deformation angles. Simulations show that the proposed method gives accurate and rapid deformation estimations, which can increase navigation accuracy after compensation of the deformation. PMID:24248280
Particle filter based visual tracking with multi-cue adaptive fusion
NASA Astrophysics Data System (ADS)
Li, Anping; Jing, Zhongliang; Hu, Shiqiang
2005-06-01
To improve the robustness of visual tracking in complex environments, such as cluttered backgrounds, partial occlusions, similar distractors and pose variations, a novel tracking method based on adaptive fusion and particle filtering is proposed in this paper. In this method, image color and shape cues are adaptively fused to represent the target observation; fuzzy logic is applied to dynamically adjust each cue's weight according to its associated reliability in the past frame; and a particle filter is adopted to deal with the non-linear and non-Gaussian problems in visual tracking. The method is demonstrated to be robust to illumination changes, pose variations, partial occlusions, cluttered backgrounds and camera motion on a test image sequence.
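The adaptive fusion step can be illustrated with a simplified weight update. The paper uses fuzzy logic to set the cue weights; as a hypothetical stand-in, the sketch below smooths the color weight toward the color cue's share of the total likelihood in the previous frame (the memory factor `alpha` is an illustrative assumption).

```python
def fuse_cue_likelihoods(color_lik, shape_lik, w_color, alpha=0.8):
    """Fuse a color cue and a shape cue into one observation likelihood.

    Each cue's weight drifts toward its relative reliability, here
    approximated by its share of the total likelihood in the last frame
    (a simplified stand-in for the paper's fuzzy-logic adjustment)."""
    total = color_lik + shape_lik
    reliability = color_lik / total if total > 0 else 0.5
    # Exponential smoothing keeps the weight stable across frames.
    w_color = alpha * w_color + (1 - alpha) * reliability
    fused = w_color * color_lik + (1 - w_color) * shape_lik
    return fused, w_color

# If the color cue dominated the last frame, its weight drifts upward.
fused, w = fuse_cue_likelihoods(color_lik=0.9, shape_lik=0.1, w_color=0.5)
```

In a tracker, `fused` would serve as the particle weight, so the more reliable cue gradually dominates the observation model.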
Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions
NASA Technical Reports Server (NTRS)
Cohn, S.; Isaacson, E.; Ghil, M.
1981-01-01
The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for the effectiveness of their gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was determined to offset inaccurate modeling of correlation functions near boundaries.
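The contrast between the KB filter and OI comes down to how the gain matrix is obtained. A minimal sketch of one Kalman analysis step in a generic linear-Gaussian setting (not the paper's shallow-water system; the two-state example is an illustrative assumption) shows where an OI scheme would instead substitute a fixed, prescribed gain.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """One Kalman analysis step. The gain K is recomputed from the
    forecast error covariance P each cycle; an OI scheme would instead
    use a fixed, prescribed gain matrix at this point."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_a = x + K @ (y - H @ x)               # analysis (updated) state
    P_a = (np.eye(len(x)) - K @ H) @ P      # analysis error covariance
    return x_a, P_a

# Two-state example observing only the first component.
x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x_a, P_a = kalman_update(x, P, np.array([1.0]), H, R)
```

Note that only the observed component's variance shrinks; the unobserved component is untouched because the prior cross-covariance here is zero.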
Optimization-based tuning of LPV fault detection filters for civil transport aircraft
NASA Astrophysics Data System (ADS)
Ossmann, D.; Varga, A.
2013-12-01
In this paper, a two-step optimal synthesis approach of robust fault detection (FD) filters for the model based diagnosis of sensor faults for an augmented civil aircraft is suggested. In the first step, a direct analytic synthesis of a linear parameter varying (LPV) FD filter is performed for the open-loop aircraft using an extension of the nullspace based synthesis method to LPV systems. In the second step, a multiobjective optimization problem is solved for the optimal tuning of the LPV detector parameters to ensure satisfactory FD performance for the augmented nonlinear closed-loop aircraft. Worst-case global search has been employed to assess the robustness of the fault detection system in the presence of aerodynamics uncertainties and estimation errors in the aircraft parameters. An application of the proposed method is presented for the detection of failures in the angle-of-attack sensor.
Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design
NASA Astrophysics Data System (ADS)
Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa
2013-09-01
This article presents a control strategy for a three-phase grid interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics in the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The startup transients can also be controlled by the proposed control strategy; in addition, the optimal LCL filter exhibits lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment, and the results are analyzed.
Optimizing experimental parameters for tracking of diffusing particles
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.
2016-08-01
We describe how a single-particle tracking experiment should be designed in order for its recorded trajectories to contain the most information about a tracked particle's diffusion coefficient. The precision of estimators for the diffusion coefficient is affected by motion blur, limited photon statistics, and the length of recorded time series. We demonstrate for a particle undergoing free diffusion that precision is negligibly affected by motion blur in typical experiments, while optimizing photon counts and the number of recorded frames is the key to precision. Building on these results, we describe for a wide range of experimental scenarios how to choose experimental parameters in order to optimize the precision. Generally, one should choose quantity over quality: experiments should be designed to maximize the number of frames recorded in a time series, even if this means lower information content in individual frames.
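One concrete estimator in the spirit of the precision analysis above is a covariance-based estimator of the diffusion coefficient, which combines the single-frame displacement variance with the covariance of successive displacements to compensate for localization noise and motion blur. The simulation parameters below are illustrative assumptions.

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimate of a diffusion coefficient from a 1-D
    track: the displacement-variance term plus a correction built from
    the covariance of successive displacements, which compensates for
    localization noise and motion blur in real recordings."""
    dx = np.diff(x)
    return np.mean(dx ** 2) / (2.0 * dt) + np.mean(dx[1:] * dx[:-1]) / dt

# Illustrative check on simulated free diffusion with D = 1.
rng = np.random.default_rng(42)
D_true, dt, n_steps = 1.0, 0.01, 20000
x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D_true * dt), n_steps))
D_hat = cve_diffusion(x, dt)
```

Consistent with the abstract's "quantity over quality" conclusion, the relative error of this estimator shrinks roughly as the inverse square root of the number of recorded frames.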
An approach to measure trace elements in particles collected on fiber filters using EDXRF.
Oztürk, Fatma; Zararsiz, Abdullah; Kirmaz, Ridvan; Tuncel, Gürdal
2011-01-15
A method developed for the analysis of a large number of aerosol samples using Energy Dispersive X-Ray Fluorescence (EDXRF), and its performance, are discussed in this manuscript. Atmospheric aerosol samples evaluated in this study were collected on cellulose fiber (Whatman-41) filters, employing a Hi-Vol sampler, at a monitoring station located on the Mediterranean coast of Turkey between 1993 and 2001. Approximately 1700 samples were collected in this period. Six hundred of these samples were analyzed by instrumental neutron activation analysis (INAA), and the rest were archived. EDXRF was selected as the analytical technique to analyze the 1700 aerosol samples because of its speed and non-destructive nature. However, analysis of aerosol samples collected on fiber filters with a surface technique such as EDXRF was a challenge. A penetration depth calculation performed in this study revealed that EDXRF can obtain information from the top 150 μm of our fiber filter material. Calibration of the instrument with currently available thin-film standards gave unsatisfactory results, since the actual penetration depth of particles into the fiber filters was much deeper than 150 μm. A method was therefore developed to analyze fiber filter samples quickly with XRF. Two hundred samples that had been analyzed by INAA were divided into two equal batches. One of these batches was used to calibrate the XRF and the second batch was used for verification. The results showed that the developed method can be reliably used for routine analysis of fiber filters loaded with ambient aerosol. PMID:21147325
Kang, B.S-J.; Johnson, E.K.; Rincon, J.
2002-09-19
Hot gas particulate filtration is a basic component in advanced power generation systems such as Integrated Gasification Combined Cycle (IGCC) and Pressurized Fluidized Bed Combustion (PFBC). These systems require effective particulate removal to protect the downstream gas turbine and also to meet environmental emission requirements. The ceramic barrier filter is one of the options for hot gas filtration. Hot gases flow through ceramic candle filters, leaving ash deposited on the outer surface of the filter. A process known as surface regeneration removes the deposited ash periodically by using a high-pressure back-pulse cleaning jet. After this cleaning process, some residual ash may remain on the filter surface; this residual ash may grow and eventually lead to mechanical failure of the filter. A High Temperature Test Facility (HTTF) was built to investigate the ash characteristics during surface regeneration at high temperatures. The system is capable of conducting surface regeneration tests of a single candle filter at temperatures up to 1500 °F. Details of the HTTF apparatus as well as some preliminary test results are presented in this paper. In order to obtain sequential digital images of the ash particle distribution during the surface regeneration process, a high-resolution, high-speed image acquisition system was integrated into the HTTF system. The regeneration pressure and the transient pressure difference between the inside of the candle filter and the chamber during regeneration were measured using a high-speed PC data acquisition system. The control variables for the high-temperature regeneration tests were (1) face velocity, (2) pressure of the back pulse, and (3) cyclic ash build-up time.
Design Optimization of Vena Cava Filters: An application to dual filtration devices
Singer, M A; Wang, S L; Diachin, D P
2009-12-03
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-01-01
Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
Wey, Ming-Yen; Chen, Ke-Hao; Liu, Kuang-Yu
2005-05-20
The phenomenon of filtering particles with a fluidized bed is complex, and the parameters that affect the control efficiency of filtration have not yet been clarified. The major objective of this study is the effect of the characteristics of ash and filter media on filtration efficiency in a fluidized bed. The performance of the fluidized bed for removal of particles in flue gas at various fluidized operating conditions, and the mechanisms of particle collection, were studied. The evaluated parameters included (1) different ashes (coal ash and incinerator ash); (2) bed material size; (3) operating gas velocity; and (4) bed temperature. The results indicate that the removal efficiency of coal ash increases initially with gas velocity, then decreases gradually as velocity exceeds a certain value. Furthermore, the removal of coal ash is enhanced as the silica sand size decreases. When the fluidized bed is operated at high temperature, diffusion is a more important mechanism than at room temperature, especially for small particles. Although inertial impaction is the main collection mechanism, the "bounce off" effect when particles collide with the bed material can reduce the removal efficiency significantly. Because of layer inversion in the fluidized bed, the removal efficiency of incinerator ash decreases with increasing gas velocity. PMID:15885419
Particles in swimming pool filters--does pH determine the DBP formation?
Hansen, Kamilla M S; Willach, Sarah; Mosbæk, Hans; Andersen, Henrik R
2012-04-01
The formation of different groups of disinfection byproducts (DBPs) during chlorination of filter particles from swimming pools was investigated at different pH values, and the resulting toxicity was estimated. Specifically, the formation of the DBP group trihalomethanes (THMs), which is regulated in many countries, and of the non-regulated haloacetic acids (HAAs) and haloacetonitriles (HANs) was investigated at 6.0 ≤ pH ≤ 8.0 under controlled chlorination conditions. The investigated particles were collected from a hot tub with a drum micro filter. In two series of experiments, with either constant initial active or initial free chlorine concentrations, the particles were chlorinated at different pH values in the range relevant for swimming pools. THM and HAA formation was reduced by decreasing pH, while HAN formation increased with decreasing pH. Based on the organic content, the relative DBP formation from the particles was higher than previously reported for body fluid analogue and filling water. The genotoxicity and cytotoxicity estimated from the formation of DBPs in the treated particle suspension increased with decreasing pH. Among the quantified DBP groups, the HANs were responsible for the majority of the toxicity from the measured DBPs. PMID:22285035
An optimal linear filter for the reduction of noise superimposed to the EEG signal.
Bartoli, F; Cerutti, S
1983-10-01
In the present paper a procedure for the reduction of noise superimposed on EEG tracings is described, which makes use of linear digital filtering and identification methods. In particular, an optimal filter (a Kalman filter) has been developed which is intended to capture the disturbances of electromyographic noise on the basis of an a priori model that treats the noise generating mechanism as a series of impulses whose temporal occurrence follows a Poisson distribution. The experimental results refer to EEG tracings recorded from 20 patients in normal resting conditions: the procedure consists of a preprocessing phase (which also uses a low-pass FIR digital filter), followed by the implementation of the identification and the Kalman filter. The performance of the filters is satisfactory from the clinical standpoint as well, achieving a marked reduction of noise without distorting the useful information contained in the signal. Furthermore, with the introduced method the EEG signal generating mechanism is parametrized by AR/ARMA models, thus obtaining an extremely sensitive feature extraction with interesting and not yet completely studied pathophysiological meanings. The procedure may find general application in noise reduction and in the enhancement of the information contained in the wide set of biological signals. PMID:6632838
Optimal design of 2D digital filters based on neural networks
NASA Astrophysics Data System (ADS)
Wang, Xiao-hua; He, Yi-gang; Zheng, Zhe-zhao; Zhang, Xu-hong
2005-02-01
Two-dimensional (2-D) digital filters are widely used in image processing and other 2-D digital signal processing fields, but designing 2-D filters is much more difficult than designing one-dimensional (1-D) ones. In this paper, a new approach for designing linear-phase 2-D digital filters is described, based on a new neural network algorithm (NNA). By using the symmetry of the given 2-D magnitude specification, a compact expression for the magnitude response of a linear-phase 2-D finite impulse response (FIR) filter is derived. Consequently, the problem of optimally designing linear-phase 2-D FIR digital filters becomes that of approximating the desired 2-D magnitude response with this compact expression. To solve the problem, a new NNA is presented based on minimizing the mean-squared error, and a convergence theorem is presented and proved to ensure that the designed 2-D filter is stable. Three design examples are given to illustrate the effectiveness of the NNA-based design approach.
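The design problem described, minimizing the mean-squared error between a symmetric cosine expansion and a desired magnitude, can be sketched with plain gradient descent standing in for the paper's NNA update rule. The grid size, filter order, and learning rate below are illustrative assumptions.

```python
import numpy as np

def design_2d_fir(desired, w1, w2, order=4, lr=0.05, iters=2000):
    """Fit coefficients a[m, n] of a zero-phase, quadrantally symmetric
    2-D FIR magnitude response
        H(w1, w2) = sum_{m,n} a[m, n] * cos(m*w1) * cos(n*w2)
    to a desired magnitude grid by gradient descent on the mean-squared
    error (a plain stand-in for the paper's neural-network algorithm)."""
    C1 = np.cos(np.outer(w1, np.arange(order + 1)))  # cosine basis in w1
    C2 = np.cos(np.outer(w2, np.arange(order + 1)))  # cosine basis in w2
    a = np.zeros((order + 1, order + 1))
    for _ in range(iters):
        H = C1 @ a @ C2.T                 # current magnitude response
        # Gradient of the mean-squared error with respect to a.
        a -= lr * (C1.T @ (H - desired) @ C2) / desired.size
    return a, C1 @ a @ C2.T

# Illustrative circularly symmetric low-pass specification.
w = np.linspace(0.0, np.pi, 16)
W1, W2 = np.meshgrid(w, w, indexing="ij")
desired = (np.sqrt(W1 ** 2 + W2 ** 2) < 1.0).astype(float)
a, H = design_2d_fir(desired, w, w)
mse = float(np.mean((H - desired) ** 2))
```

Because the response is linear in the coefficients, this is a convex least-squares fit; the NNA of the paper plays the role of the iterative minimizer here.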
Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-01-01
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) when the robots are subject to faults, such as the sticking of sensors or the slippage of wheels, because the discrete fault models and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to handle hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318
Fitting complex population models by combining particle filters with Markov chain Monte Carlo.
Knape, Jonas; de Valpine, Perry
2012-02-01
We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data: a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm. PMID:22624307
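The PFMCMC idea, feeding a particle filter's likelihood estimate into the Metropolis-Hastings accept/reject step, can be sketched on a toy state-space model: a 1-D random walk rather than the paper's population models, with all parameters chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

def pf_loglik(y, sigma_proc, sigma_obs, n=200):
    """Bootstrap particle filter estimate of the log marginal likelihood
    of a 1-D random-walk state-space model (a toy stand-in for the
    population models fitted in the paper)."""
    particles = rng.normal(0.0, 1.0, n)
    loglik = 0.0
    for obs in y:
        particles = particles + rng.normal(0.0, sigma_proc, n)
        w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2)
        w /= sigma_obs * np.sqrt(2.0 * np.pi)
        loglik += np.log(w.mean())        # per-step likelihood estimate
        particles = rng.choice(particles, size=n, p=w / w.sum())
    return loglik

def pmmh(y, n_iter=150, step=0.1):
    """Particle-marginal Metropolis-Hastings over the process noise scale:
    the PF's noisy likelihood estimate drives the accept/reject step."""
    theta = 0.5
    loglik = pf_loglik(y, theta, 1.0)
    chain = []
    for _ in range(n_iter):
        prop = abs(theta + rng.normal(0.0, step))   # reflected random walk
        loglik_prop = pf_loglik(y, prop, 1.0)
        if np.log(rng.random()) < loglik_prop - loglik:
            theta, loglik = prop, loglik_prop
        chain.append(theta)
    return chain

# Synthetic data from the same toy model, then a short chain.
state = np.cumsum(rng.normal(0.0, 0.5, 50))
y = state + rng.normal(0.0, 1.0, 50)
chain = pmmh(y)
```

The "brute-force" flavor the abstract mentions is visible here: every MCMC iteration reruns a full particle filter, which is exactly the cost that parallelization targets.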
NASA Astrophysics Data System (ADS)
Sugano, Hiroki; Ochi, Hiroyuki; Nakamura, Yukihiro; Miyamoto, Ryusuke
Recently, many researchers have tackled accurate object recognition, and many algorithms have been proposed. However, these algorithms suffer from the variety of real environments, such as changes in the object's orientation or shading. The Cascade Particle Filter, a tracking algorithm that constructs the object model while tracking, has been proposed to meet such demands in real environments. We have been investigating how to implement accurate object recognition on embedded systems in real time. In order to apply the Cascade Particle Filter to embedded applications such as surveillance, automotive systems, and robotics, a hardware accelerator is indispensable because of limitations on power consumption. In this paper we propose a hardware implementation of the Discrete AdaBoost algorithm, which is the most computationally intensive part of the Cascade Particle Filter. To implement the proposed hardware, we use PICO Express, a high-level synthesis tool provided by Synfora, for rapid prototyping. The implementation results show that the synthesized hardware has 1,132,038 transistors and a die area of 2,195 µm × 1,985 µm in a 0.180 µm library. Simulation shows that the total processing time is about 8.2 milliseconds at a 65 MHz operating frequency.
NASA Astrophysics Data System (ADS)
Baroncini, F.; Castelli, F.
2009-09-01
Data assimilation techniques based on ensemble filtering are widely regarded as the best approach for solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas the model must run at each time step to update the error covariance matrix. A suboptimal approach is therefore often the more suitable choice. Various suboptimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different suboptimal techniques. These aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables, and high parametric uncertainty. This research study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric "on ground" measurements to land surface temperature retrievals from satellite. The experiments presented concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low-spatial-resolution version of the hydrologic model (3.2 km). A new Kalman filter based algorithm is presented: this filter tries to address the main challenges of hydrological modeling uncertainty. In the forecast step, the proposed filter uses a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach with a propagation of both deterministic and stochastic ensembles to improve robustness and convergence.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
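The polyphase decomposition that TD-MRFIR is contrasted with can be sketched for the decimation case: the taps are split into m phase subfilters, each driven by a downsampled input stream, so only the outputs that survive downsampling are ever computed. The signal and taps below are hypothetical.

```python
import numpy as np

def polyphase_decimate(x, h, m):
    """Causal FIR filtering plus decimation-by-m via polyphase
    decomposition: every subfilter runs at the low output rate, which
    is the throughput advantage the abstract describes."""
    h = np.concatenate([h, np.zeros((-len(h)) % m)])  # pad taps to a multiple of m
    out_len = -(-len(x) // m)                         # ceil(len(x) / m) outputs
    y = np.zeros(out_len)
    for p in range(m):
        taps_p = h[p::m]                              # phase-p subfilter taps
        x_p = np.concatenate([np.zeros(p), x])[::m]   # input stream feeding phase p
        y += np.convolve(x_p, taps_p)[:out_len]
    return y

# Matches plain filter-then-downsample on hypothetical data.
rng = np.random.default_rng(3)
x = rng.normal(size=50)
h = rng.normal(size=7)
y = polyphase_decimate(x, h, 3)
y_ref = np.convolve(x, h)[:len(x)][::3]
```

Thread decomposition instead groups the same arithmetic by output sample (one convolution "thread" per output), which is the alternative organization the paper maps onto FPGA resources.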
Optimizing Stellarators for Energetic Particle Confinement using BEAMS3D
NASA Astrophysics Data System (ADS)
Bolgert, Peter; Drevlak, Michael; Lazerson, Sam; Gates, David; White, Roscoe
2015-11-01
Energetic particle (EP) loss has been called the "Achilles heel of stellarators" (Helander, Rep. Prog. Phys. 77, 087001 (2014)), and there is a great need for magnetic configurations with improved EP confinement. In this study we utilize a newly developed capability of the stellarator optimization code STELLOPT: the ability to optimize EP confinement via an interface with the guiding center code BEAMS3D (McMillan et al., Plasma Phys. Control. Fusion 56, 095019 (2014)). Using this new tool, optimizations of the W7-X experiment and the ARIES-CS reactor are performed where the EP loss fraction is one of many target functions to be minimized. In W7-X, we simulate the experimental NBI system using realistic beam geometry and beam deposition physics. The goal is to find configurations with improved neutral beam deposition and energetic particle confinement. These calculations are compared to previous studies of W7-X NBI deposition. In ARIES-CS, we launch 3.5 MeV alpha particles from a near-axis flux surface using a uniform grid in toroidal and poloidal angle. As these particles are born from D-T reactions, we consider an isotropic distribution in velocity space. This research is supported by DoE Contract Number DE-AC02-09CH11466.
Optimization of monolithic charged-particle sensor arrays
NASA Astrophysics Data System (ADS)
Kleinfelder, Stuart; Li, Shengdong; Chen, Yandong
2007-09-01
Direct-detection CMOS image sensors optimized for charged-particle imaging applications, such as electron microscopy and particle physics, have been designed, fabricated and characterized. These devices directly image charged particles without reliance on image-degrading hybrid technologies such as the use of scintillating materials. Based on standard CMOS Active Pixel Sensor (APS) technology, the sensor arrays use an 8-20 μm thick epitaxial layer that acts as a sensitive region for the generation and collection of ionization electrons resulting from impinging high-energy particles. A range of optimizations to this technology have been developed via simulation and experimental device design. These include the simulation and measurement of charge-collection efficiency vs. recombination, the effect of diode area and stray capacitance vs. signal gain and noise, and the effect of different epitaxial silicon depths. Several experimental devices and full-scale prototypes are presented, including two prototypes that systematically and independently vary pixel pitch and diode area, and a complete high-resolution camera for electron microscopy optimized through experiment and simulation. The electron microscope camera has 1 k × 1 k pixels with a 5 μm pixel pitch and an 8 μm epitaxial silicon thickness.
NASA Astrophysics Data System (ADS)
Salazar, Juan P. L. C.; Collins, Lance R.
2012-08-01
In this study, we investigate the effect of "biased sampling," i.e., the clustering of inertial particles in regions of the flow with low vorticity, and "filtering," i.e., the tendency of inertial particles to attenuate the fluid velocity fluctuations, on the probability density function of inertial particle accelerations. In particular, we find that the concept of "biased filtering" introduced by Ayyalasomayajula et al. ["Modeling inertial particle acceleration statistics in isotropic turbulence," Phys. Fluids 20, 0945104 (2008), 10.1063/1.2976174], in which particles filter stronger acceleration events more than weaker ones, is relevant to the higher order moments of acceleration. Flow topology and its connection to acceleration is explored through invariants of the velocity-gradient, strain-rate, and rotation-rate tensors. A semi-quantitative analysis is performed where we assess the contribution of specific flow topologies to acceleration moments. Our findings show that the contributions of regions of high vorticity and low strain decrease significantly with Stokes number, a non-dimensional measure of particle inertia. The contribution from regions of low vorticity and high strain exhibits a peak at a Stokes number of approximately 0.2. Following the methodology of Ooi et al. ["A study of the evolution and characteristics of the invariants of the velocity-gradient tensor in isotropic turbulence," J. Fluid Mech. 381, 141 (1999), 10.1017/S0022112098003681], we compute mean conditional trajectories in planes formed by pairs of tensor invariants in time. Among the interesting findings is the existence of a stable focus in the plane formed by the second invariants of the strain-rate and rotation-rate tensors. Contradicting the results of Ooi et al., we find a stable focus in the plane formed by the second and third invariants of the strain-rate tensor for fluid tracers. We confirm, at an even higher Reynolds number, the conjecture of Collins and Keswani ["Reynolds
NASA Astrophysics Data System (ADS)
Yu, Zhongbo; Liu, Di; Lü, Haishen; Fu, Xiaolei; Xiang, Long; Zhu, Yonghua
2012-12-01
Hybrid data assimilation (DA) is widely used in recent hydrology and water resources research. In this study, one newly introduced technique, the ensemble particle filter (EnPF), formed by coupling the ensemble Kalman filter (EnKF) with the particle filter (PF), is applied to multi-layer soil moisture prediction in the Meilin watershed based on support vector machines (SVMs). The data used in this paper include six-layer soil moisture (0-5 cm, 30 cm, 50 cm, 100 cm, 200 cm and 300 cm) and five meteorological parameters (soil temperature at 5 cm and 20 cm, air temperature, relative humidity and solar radiation) in the study area. To evaluate the EnPF approach, two other filters, the EnKF and the PF, are applied as alternative data assimilation methods for comparison. In addition, the SVM simulation without data assimilation updating is discussed as a baseline for evaluating the assimilation techniques. Two experimental cases are explored, one with 200 initial training ensemble members in the SVM training phase and the other with 1000. Three main findings are obtained: (1) the SVM is a statistically sound and robust model for soil moisture prediction in both the surface and root zone layers, and the larger the initial training ensemble, the more effective the derived operator; (2) data assimilation does improve the performance of SVM modeling; (3) the EnPF outperforms the other two filters as well as the SVM model. Moreover, the ability of the EnPF and PF is not positively related to the resampling ensemble size: when the resampling size exceeds a certain amount, their performance degrades. Because the EnPF still performs better than the EnKF, it can be used as a powerful data assimilation tool in soil moisture prediction.
Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit
NASA Astrophysics Data System (ADS)
Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping
2006-05-01
Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the-premises (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310 nm signal from the down-stream 1490 nm and 1555 nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss.
Design optimization of integrated BiDi triplexer optical filter based on planar lightwave circuit.
Xu, Chenglin; Hong, Xiaobin; Huang, Wei-Ping
2006-05-29
Design optimization of a novel integrated bi-directional (BiDi) triplexer filter based on planar lightwave circuit (PLC) for fiber-to-the-premises (FTTP) applications is described. A multi-mode interference (MMI) device is used to filter the up-stream 1310 nm signal from the down-stream 1490 nm and 1555 nm signals. An array waveguide grating (AWG) device performs the dense WDM function by further separating the two down-stream signals. The MMI and AWG are built on the same substrate with monolithic integration. The design is validated by simulation, which shows excellent performance in terms of filter spectral characteristics (e.g., bandwidth, cross-talk, etc.) as well as insertion loss. PMID:19516623
Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang
2016-01-01
Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main modules, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control. First, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Second, an information fusion algorithm using an adaptive Kalman filter processes the data obtained by the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles; the low-rank representation is used to optimize an objective particle template that has the smallest L1-norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931
Filter feeders and plankton increase particle encounter rates through flow regime control.
Humphries, Stuart
2009-05-12
Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has great implications for our understanding of selection pressures on the physiology and ecology of organisms; for example, filter feeders are able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are approximately 33% of those predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879
The use of an inert, radioactively labeled microsphere as a measure of particle accumulation (filtration activity) by Mulinia lateralis (Say) and Mytilus edulis L. was evaluated. Bottom sediment plus temperature and salinity of the water were varied to induce changes in filtratio...
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
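The global-best swarm update underlying this kind of optimizer can be sketched as follows. This is a generic textbook PSO minimizing a stand-in objective, not the FAC engine performance model from the thesis; the sphere objective, bounds, and coefficient values are illustrative assumptions (maximizing specific impulse corresponds to minimizing its negative).

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over box `bounds` with a basic global-best PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Stand-in for the engine model: minimize the 2-D sphere function.
best, val = pso(lambda x: x[0] ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)])
```

A brute-force validation, as in the thesis, would simply grid the two design variables and confirm that no grid point beats the swarm's returned optimum.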
NASA Astrophysics Data System (ADS)
Shen, Zheqi; Tang, Youmin
2016-04-01
The ensemble Kalman particle filter (EnKPF) is a combination of two Bayesian-based algorithms, namely, the ensemble Kalman filter (EnKF) and the sequential importance resampling particle filter (SIR-PF). It was recently introduced to address non-Gaussian features in data assimilation for highly nonlinear systems, by providing a continuous interpolation between the EnKF and SIR-PF analysis schemes. In this paper, we first extend the EnKPF algorithm by modifying the formula for the computation of the covariance matrix, making it suitable for nonlinear measurement functions (we will call this extended algorithm nEnKPF). Further, a general form of the Kalman gain is introduced to the EnKPF to improve the performance of the nEnKPF when the measurement function is highly nonlinear (this improved algorithm is called mEnKPF). The Lorenz '63 model and Lorenz '96 model are used to test the two modified EnKPF algorithms. The experiments show that the mEnKPF and nEnKPF, given an affordable ensemble size, can perform better than the EnKF for the nonlinear systems with nonlinear observations. These results suggest a promising opportunity to develop a non-Gaussian scheme for realistic numerical models.
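The SIR-PF analysis step that the EnKPF interpolates toward can be sketched as a minimal bootstrap update: weight each particle by the Gaussian observation likelihood, then resample. The scalar state, identity observation operator `h`, and error variance `r` below are illustrative assumptions, not the Lorenz configurations of the paper.

```python
import math
import random

def sir_pf_update(particles, y, h, r):
    """One SIR-PF analysis: weight by Gaussian likelihood p(y|x), then resample."""
    # Importance weights from the observation likelihood.
    w = [math.exp(-0.5 * (y - h(x)) ** 2 / r) for x in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # Stratified resampling: one draw per equal-probability stratum.
    n = len(particles)
    positions = [(random.random() + i) / n for i in range(n)]
    cumulative, out, i = 0.0, [], 0
    for xi, wi in zip(particles, w):
        cumulative += wi
        while i < n and positions[i] < cumulative:
            out.append(xi)
            i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(particles[-1])
    return out

random.seed(0)
# Gaussian prior N(0, 4), observation y = 1 with error variance 0.25:
# the exact posterior mean is (1/0.25)/(1/0.25 + 1/4) = 0.94.
particles = [random.gauss(0.0, 2.0) for _ in range(1000)]
posterior = sir_pf_update(particles, y=1.0, h=lambda x: x, r=0.25)
```

Because this linear-Gaussian case has a closed-form posterior, it doubles as a sanity check: the resampled ensemble mean should sit near 0.94.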
Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin
2014-03-01
A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Due to the noisy and erroneous measurements of a MEMS gyroscope, it is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with the drift dynamics of the gyroscope, are employed as the system state-space model. Nonlinear attitude kinematics, drift and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear and Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as input to the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. PMID:24342270
A particle filter to reconstruct a free-surface flow from a depth camera
NASA Astrophysics Data System (ADS)
Combés, Benoit; Heitz, Dominique; Guibert, Anthony; Mémin, Etienne
2015-10-01
We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by the use of two observations instead of the single observation classically used. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example and the flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors, and also to initial and inflow conditions, is considered. We illustrate the interest of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1998-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1999-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. Results from the design, manufacture and test of linear wedge filters built
Digital restoration of indium-111 and iodine-123 SPECT images with optimized Metz filters
King, M.A.; Schwinger, R.B.; Penney, B.C.; Doherty, P.W.; Bianco, J.A.
1986-08-01
A number of radiopharmaceuticals of great current clinical interest for imaging are labeled with radionuclides that emit medium- to high-energy photons either as their primary radiation, or in low abundance in addition to their primary radiation. The imaging characteristics of these radionuclides result in gamma camera image quality that is inferior to that of Tc-99m images. Thus, in this investigation In-111 and I-123 contaminated with approximately 4% I-124 were chosen to test the hypothesis that a dramatic improvement in planar and SPECT images may be obtainable with digital image restoration. The count-dependent Metz filter is shown to be able to deconvolve the rapid drop at low spatial frequencies in the imaging system modulation transfer function (MTF) resulting from the acceptance of septal penetration and scatter in the camera window. Use of the Metz filter was found to result in improved spatial resolution as measured by both the full width at half maximum and full width at tenth maximum for both planar and SPECT studies. Two-dimensional, prereconstruction filtering with optimized Metz filters was also determined to improve image contrast, while decreasing the noise level for SPECT studies. A dramatic improvement in image quality was observed with the clinical application of this filter to SPECT imaging.
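The Metz filter has the closed form M(v) = [1 - (1 - MTF(v)^2)^x] / MTF(v): it approximates an inverse filter where the MTF is high and rolls off smoothly where the MTF (and hence the signal-to-noise ratio) is low. A minimal sketch, assuming an illustrative Gaussian MTF model and order x rather than the fitted clinical values:

```python
import numpy as np

def metz(mtf, x):
    """Metz filter gain: inverse-filtering at high MTF, smoothing at low MTF.

    For order x = 1 it reduces to the MTF itself (pure smoothing);
    as x grows it approaches 1/MTF over more of the frequency axis.
    """
    mtf = np.asarray(mtf, dtype=float)
    return (1.0 - (1.0 - mtf**2) ** x) / mtf

# Illustrative Gaussian MTF model over normalized spatial frequency nu.
nu = np.linspace(0.01, 1.0, 100)
mtf = np.exp(-((nu / 0.35) ** 2))
gain = metz(mtf, x=8)   # >1 (restoration) at low nu, <1 (suppression) at high nu
```

The count-dependent variant mentioned in the abstract chooses the order x from the image count level, trading restoration against noise amplification.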
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
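A BPOF and the peak-to-correlation energy metric can be sketched as follows. The BPOF is the sign of the real part of the conjugate reference spectrum; the image size and random test scenes are illustrative assumptions, and the analyzer-angle continuum of BPAFs is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpof(reference):
    """Binary phase-only filter: +1/-1 from the sign of Re(F*) of the reference."""
    F = np.fft.fft2(reference)
    return np.where(np.real(np.conj(F)) >= 0, 1.0, -1.0)

def pce(scene, filt):
    """Peak-to-correlation energy: peak intensity over total correlation energy."""
    corr = np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt)) ** 2
    return corr.max() / corr.sum()

ref = rng.standard_normal((32, 32))
H = bpof(ref)
pce_match = pce(ref, H)                            # in-class input: sharp peak
pce_miss = pce(rng.standard_normal((32, 32)), H)   # out-of-class input: no peak
```

Discrimination in the sense of the abstract corresponds to a large gap between the in-class and out-of-class PCE values.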
NASA Astrophysics Data System (ADS)
Huang, Haibin; Zhuang, Yufei
2015-08-01
This paper proposes a method that plans energy-optimal trajectories for multi-satellite formation reconfiguration in the deep space environment. A novel co-evolutionary particle swarm optimization algorithm is presented to solve the nonlinear programming problem, so that the computational complexity of calculating gradient information is avoided. Each swarm represents one satellite, and through communication with the other swarms during the evolution, collisions between satellites can be avoided. In addition, a dynamic depth-first search algorithm is proposed to solve the redundant search problem of the co-evolutionary particle swarm optimization method, shortening the computation time considerably. In order to make the actual trajectories optimal and collision-free under disturbance, a re-planning strategy is derived for the formation reconfiguration maneuver.
Optimal PID Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique
NASA Astrophysics Data System (ADS)
Oonsivilai, Anant; Marungsri, Boonruang
2008-10-01
An application of an intelligent search technique to find optimal parameters of a power system stabilizer (PSS) with a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search in tuning the PID-PSS parameters. System damping is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.
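The objective function such a tuner minimizes is typically a time-weighted error criterion of the closed-loop step response. A minimal sketch of an ITAE objective for PID gains, assuming an illustrative lightly damped second-order plant rather than the machine-swing model of the paper (all numeric values are stand-ins):

```python
def itae(kp, ki, kd, dt=0.01, t_end=5.0):
    """ITAE cost of the closed-loop unit-step response under PID control.

    Toy plant: y'' = -2*zeta*wn*y' - wn^2*y + wn^2*u with wn = 2, zeta = 0.1,
    integrated with forward Euler. Lower cost means faster, better-damped response.
    """
    wn, zeta = 2.0, 0.1
    y = yd = 0.0
    integral = prev_err = 0.0
    cost, t = 0.0, 0.0
    while t < t_end:
        err = 1.0 - y                        # unit step reference
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        ydd = -2 * zeta * wn * yd - wn**2 * y + wn**2 * u
        yd += ydd * dt
        y += yd * dt
        cost += t * abs(err) * dt            # time-weighted absolute error
        t += dt
    return cost

tuned = itae(kp=5.0, ki=2.0, kd=1.0)         # well-damped gains
untuned = itae(kp=1.0, ki=0.0, kd=0.0)       # oscillatory, steady-state error
```

A swarm optimizer (APSO in the paper) would search the (kp, ki, kd) space for the gains minimizing this cost.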
Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.
McMinn, Brian R
2013-11-01
Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique to each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both virus elution solutions and sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. Elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and 41 was 49% and 60%, respectively. By optimizing secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses from both surface and drinking waters. PMID:23796954
NASA Astrophysics Data System (ADS)
Kirchstetter, T.; Preble, C.; Dallmann, T. R.; DeMartini, S. J.; Tang, N. W.; Kreisberg, N. M.; Hering, S. V.; Harley, R. A.
2013-12-01
Diesel particle filters have become widely used in the United States since the introduction in 2007 of a more stringent exhaust particulate matter emission standard for new heavy-duty diesel vehicle engines. California has instituted additional regulations requiring retrofit or replacement of older in-use engines to accelerate emission reductions and air quality improvements. This presentation summarizes pollutant emission changes measured over several field campaigns at the Port of Oakland in the San Francisco Bay Area associated with diesel particulate filter use and accelerated modernization of the heavy-duty truck fleet. Pollutants in the exhaust plumes of hundreds of heavy-duty trucks en route to the Port were measured in 2009, 2010, 2011, and 2013. Ultrafine particle number, black carbon (BC), nitrogen oxides (NOx), and nitrogen dioxide (NO2) concentrations were measured at a frequency ≤ 1 Hz and normalized to measured carbon dioxide concentrations to quantify fuel-based emission factors (grams of pollutant emitted per kilogram of diesel consumed). The size distribution of particles in truck exhaust plumes was also measured at 1 Hz. In the two most recent campaigns, emissions were linked on a truck-by-truck basis to installed emission control equipment via the matching of transcribed license plates to a Port truck database. Accelerated replacement of older engines with newer engines and retrofit of trucks with diesel particle filters reduced fleet-average emissions of BC and NOx. Preliminary results from the two most recent field campaigns indicate that trucks without diesel particle filters emit 4 times more BC than filter-equipped trucks. Diesel particle filters increase emissions of NO2, however, and filter-equipped trucks have NO2/NOx ratios that are 4 to 7 times greater than trucks without filters. Preliminary findings related to particle size distribution indicate that (a) most trucks emitted particles characterized by a single mode of approximately
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate. PMID:26064085
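The quantum-behaved position update that EB-QPSO builds on can be sketched as follows: each particle jumps around a local attractor with a step scaled by its distance to the mean of the personal bests. This is standard QPSO only; the contraction-expansion coefficient, sphere test function, and bounds are illustrative assumptions, and the paper's elitist transposon breeding is not reproduced.

```python
import math
import random

def qpso(objective, bounds, n=30, iters=300, alpha=0.75):
    """Minimize `objective` with quantum-behaved PSO (no velocities)."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    pbest = [p[:] for p in x]
    pval = [objective(p) for p in x]
    g = min(range(n), key=lambda i: pval[i])
    for _ in range(iters):
        # Mean of all personal bests (the "mainstream thought point").
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = random.random()
                # Local attractor between personal best and global best.
                p = phi * pbest[i][d] + (1 - phi) * pbest[g][d]
                u = random.random()
                step = alpha * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                x[i][d] = p + step if random.random() < 0.5 else p - step
                x[i][d] = min(max(x[i][d], bounds[d][0]), bounds[d][1])
            v = objective(x[i])
            if v < pval[i]:
                if v < pval[g]:
                    pbest[i], pval[i] = x[i][:], v
                    g = i
                else:
                    pbest[i], pval[i] = x[i][:], v
    return pbest[g], pval[g]

random.seed(1)
best, val = qpso(lambda z: sum(t * t for t in z), [(-10, 10)] * 2)
```

EB-QPSO would additionally, at set intervals, recombine the pbest/gbest pool with transposon operators and keep any improved individuals.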
Considerations in identifying optimal particles for radiation medicine.
Slater, James M
2006-04-01
Of the many ionizing particles discovered so far, only a few are reasonable to consider for radiation therapy. These include photons, protons, neutrons, electrons, mesons, antiprotons, and ions heavier than hydrogen. Most of these particles are used therapeutically to destroy or inactivate malignant and sometimes benign cells. Since the late 1930s, accelerators have been developed that have expanded radiation oncologists' abilities to produce various ionizing particle beams. Over the past decade, radiation oncologists have become increasingly interested in pursuing particles other than the conventional photons that have been used almost exclusively since X-rays were discovered in 1895. Physicians recognize that normal-tissue morbidity from all forms of anti-cancer treatment is the primary factor limiting the success of those treatments. In radiation therapy, all particles mentioned above can destroy any cancer cell; controlling the beam in three dimensions, thus providing the physician with the capability of avoiding normal-tissue injury, is the fundamental deficiency in the use of X-rays (photons). Heavy charged particles possess near-ideal characteristics for exercising control in three dimensions; their primary differences are due to the number of protons contained within their nuclei. As the number of protons (the atomic number) increases, the ionization density (LET) increases. In selecting the optimal particle for therapy from among the heavy charged particles, one must carefully consider the ionization density created by each specific particle. Ionization density creates both advantages and disadvantages for patient treatment; these factors must be matched with the patient's precise clinical needs. The current state of the art involves studying the clinical advantages and disadvantages of the lightest ion, the proton, as compared to other particles used or contemplated for use. Full analysis must await adequate data developed from long-term studies to
Parallel global optimization with the particle swarm algorithm
Schutte, J. F.; Reinbolt, J. A.; Fregly, B. J.; Haftka, R. T.; George, A. D.
2007-01-01
Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima—large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226
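The synchronization penalty described above can be illustrated with a toy cost model: in a synchronous iteration, every node waits at the barrier for the slowest node, so speedup and parallel efficiency degrade as per-evaluation times become imbalanced. The timing numbers below are illustrative assumptions, not measurements from the study.

```python
def parallel_stats(eval_times, n_nodes):
    """Speedup and efficiency for one synchronous iteration of parallel PSO.

    eval_times: per-particle fitness evaluation times for one iteration.
    Particles are dealt round-robin across nodes; the iteration ends only
    when the most heavily loaded node finishes (the synchronization barrier).
    """
    serial = sum(eval_times)
    loads = [sum(eval_times[i::n_nodes]) for i in range(n_nodes)]
    parallel = max(loads)
    speedup = serial / parallel
    return speedup, speedup / n_nodes

# Load-balanced: every evaluation takes 1 s -> near-ideal efficiency.
balanced = parallel_stats([1.0] * 32, 8)
# Load-imbalanced: a few slow evaluations dominate the barrier.
imbalanced = parallel_stats([1.0] * 28 + [8.0] * 4, 8)
```

An asynchronous implementation, as the abstract suggests, removes the barrier so fast nodes keep evaluating instead of idling behind the slowest evaluation.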
Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy
NASA Astrophysics Data System (ADS)
Wang, Ke; Qiu, Ping
2015-05-01
Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter is typically used to isolate the soliton from the residual in order to avoid excessive energy deposition on the sample, which ultimately leads to optical damage. However, since the soliton is not cleanly separated from the residual, a criterion for choosing the optimal filtering wavelength has been lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton overlaps substantially with the residual. This optimization is based on the most efficient signal generation and depends entirely on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.
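The proposed criterion reduces to a one-line search once the multiphoton signal and the transmitted pulse energy have been measured for a set of candidate cutoff wavelengths. A minimal sketch with toy numbers (not measured data); `n` is the order of the multiphoton process:

```python
import numpy as np

def optimal_cutoff(wavelengths, signal, pulse_energy, n=2):
    """Return the longpass cutoff wavelength maximizing S / E**n, where
    S is the measured multiphoton signal and E the transmitted pulse
    energy for that cutoff (n = 2 for two-photon, 3 for three-photon)."""
    ratio = np.asarray(signal) / np.asarray(pulse_energy) ** n
    return wavelengths[int(np.argmax(ratio))]

# Toy numbers, purely illustrative
wl = np.array([1600.0, 1650.0, 1700.0])       # candidate cutoffs (nm)
sig = np.array([1.0, 4.0, 3.0])               # measured multiphoton signal
energy = np.array([1.0, 1.5, 1.0])            # transmitted pulse energy
best_cutoff = optimal_cutoff(wl, sig, energy, n=2)
```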
Optimal color filter array design: quantitative conditions and an efficient search procedure
NASA Astrophysics Data System (ADS)
Lu, Yue M.; Vetterli, Martin
2009-01-01
Most digital cameras employ a spatial subsampling process, implemented as a color filter array (CFA), to capture color images. The choice of CFA patterns has a great impact on the performance of subsequent reconstruction (demosaicking) algorithms. In this work, we propose a quantitative theory for optimal CFA design. We view the CFA sampling process as an encoding (low-dimensional approximation) operation and, correspondingly, demosaicking as the best decoding (reconstruction) operation. Finding the optimal CFA is thus equivalent to finding the optimal approximation scheme for the original signals with minimum information loss. We present several quantitative conditions for optimal CFA design, and propose an efficient computational procedure to search for the best CFAs that satisfy these conditions. Numerical experiments show that the optimal CFA patterns designed from the proposed procedure can effectively retain the information of the original full-color images. In particular, with the designed CFA patterns, high quality demosaicking can be achieved by using simple and efficient linear filtering operations in the polyphase domain. The visual qualities of the reconstructed images are competitive to those obtained by the state-of-the-art adaptive demosaicking algorithms based on the Bayer pattern.
Optimizing Magnetite Nanoparticles for Mass Sensitivity in Magnetic Particle Imaging
Ferguson, R Matthew; Minard, Kevin R; Khandhar, Amit P; Krishnan, Kannan M
2011-03-01
Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, we explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency f_{0}. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally, at an arbitrarily chosen f_{0} = 250 kHz, using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. Results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution, and size-dependent magnetic relaxation. Results: Our experimental results show clear variation in the MPI signal intensity as a function of MNP size that is in good agreement with modeled results. A maximum in the plot of MPI signal vs. MNP size indicates that there is a particular size that is optimal for the chosen frequency of 250 kHz. Conclusions: For MPI at any chosen frequency, there will exist a characteristic particle size that generates maximum signal amplitude. We illustrate this at 250 kHz with particles of 15 nm core diameter.
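The size dependence enters through the Langevin argument, since the particle moment scales with core volume (d^3). A minimal sketch of the equilibrium Langevin model (neglecting the size-dependent relaxation that the paper's full model includes, so no signal maximum appears in this simplified picture, only the steeper response of larger cores):

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability (T*m/A)
KB = 1.380649e-23         # Boltzmann constant (J/K)
MS = 4.46e5               # saturation magnetization of magnetite (A/m)

def langevin(xi):
    """Langevin function L(x) = coth(x) - 1/x, with the small-argument
    limit L(x) ~ x/3 substituted near zero for numerical stability."""
    xi = np.asarray(xi, dtype=float)
    safe = np.where(np.abs(xi) > 1e-8, xi, 1.0)
    return np.where(np.abs(xi) > 1e-8, 1.0 / np.tanh(safe) - 1.0 / safe, xi / 3.0)

def equilibrium_m(d_core_nm, H, T=300.0):
    """Equilibrium reduced magnetization M/Ms of monodisperse magnetite
    cores of diameter d at applied field H (A/m): the particle moment
    scales as d**3, so larger cores saturate at much lower fields."""
    d = d_core_nm * 1e-9
    moment = MS * np.pi * d ** 3 / 6.0
    return langevin(MU0 * moment * H / (KB * T))
```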
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Garro, Beatriz A.; Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Optimal estimation of diffusion coefficients from single-particle trajectories
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-02-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
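A minimal sketch of a covariance-based estimator of this kind for a 1D trajectory, assuming the commonly quoted form D ≈ ⟨Δx²⟩/(2Δt) + ⟨ΔxₙΔxₙ₊₁⟩/Δt, where the covariance term cancels the bias introduced by localization noise (consult the paper for the exact estimator and its motion-blur correction):

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimate of the diffusion coefficient from a
    single trajectory x sampled at interval dt.  The covariance term
    corrects the bias that localization noise introduces into the
    naive MSD-based estimate (it is ~0 for noise-free diffusion)."""
    dx = np.diff(x)
    return np.mean(dx ** 2) / (2 * dt) + np.mean(dx[1:] * dx[:-1]) / dt

# Synthetic check: pure 1D Brownian motion with known D
rng = np.random.default_rng(1)
D_true, dt, n = 0.5, 0.01, 20000
x = np.cumsum(np.sqrt(2 * D_true * dt) * rng.standard_normal(n))
D_hat = cve_diffusion(x, dt)
```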
A geometry-based particle filtering approach to white matter tractography.
Savadjiev, Peter; Rathi, Yogesh; Malcolm, James G; Shenton, Martha E; Westin, Carl-Fredrik
2010-01-01
We introduce a fibre tractography framework based on a particle filter which estimates a local geometrical model of the underlying white matter tract, formulated as a 'streamline flow' using generalized helicoids. The method is not dependent on the diffusion model, and is applicable to diffusion tensor (DT) data as well as to high angular resolution reconstructions. The geometrical model allows for a robust inference of local tract geometry, which, in the context of the causal filter estimation, guides tractography through regions with partial volume effects. We validate the method on synthetic data and present results on two types of in vivo data: diffusion tensors and a spherical harmonic reconstruction of the fibre orientation distribution function (fODF). PMID:20879320
NASA Astrophysics Data System (ADS)
Zhao, X. F.; Huang, S. X.
2012-08-01
This paper addresses the problem of estimating range-varying parameters of the height-dependent refractivity over the sea surface from radar sea clutter. In the forward simulation, the split-step Fourier parabolic equation (PE) is used to compute the radar clutter power in complex refractive environments. Making use of the inherent Markovian structure of the split-step Fourier PE solution, the refractivity from clutter (RFC) problem is formulated within a nonlinear recursive Bayesian state estimation framework. The particle filter (PF), which is a technique for implementing a recursive Bayesian filter by Monte Carlo simulations, is used to track range-varying characteristics of the refractivity profiles. Basic ideas of employing the PF to solve the RFC problem are introduced. Both simulation and real data results are presented to verify the feasibility of the PF-RFC approach.
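The recursive Bayesian tracking that the PF implements can be illustrated with a generic bootstrap (sequential importance resampling) filter. The scalar random-walk model below is hypothetical and merely stands in for the range-varying refractivity parameters; it is not the split-step Fourier PE forward model:

```python
import numpy as np

def bootstrap_pf(obs, n_particles, q_std, r_std, seed=0):
    """Bootstrap (sequential importance resampling) particle filter for
    a scalar random walk x_k = x_{k-1} + q observed as y_k = x_k + r.
    Returns the posterior-mean state estimate at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, q_std, n_particles)  # propagate
        logw = -0.5 * ((y - particles) / r_std) ** 2                 # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))               # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates)

# Hypothetical demo: a slowly drifting scalar state seen through noise
rng = np.random.default_rng(42)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))
obs = truth + rng.normal(0.0, 0.5, 200)
est = bootstrap_pf(obs, n_particles=500, q_std=0.1, r_std=0.5)
```

With the model matched to the data, the filtered estimate tracks the truth considerably more closely than the raw observations do.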
NASA Astrophysics Data System (ADS)
Zhao, X. F.; Huang, S. X.; Wang, D. X.
2012-11-01
This paper addresses the problem of estimating range-varying parameters of the height-dependent refractivity over the sea surface from radar sea clutter. In the forward simulation, the split-step Fourier parabolic equation (PE) is used to compute the radar clutter power in complex refractive environments. Making use of the inherent Markovian structure of the split-step Fourier PE solution, the refractivity from clutter (RFC) problem is formulated within a nonlinear recursive Bayesian state estimation framework. The particle filter (PF), which is a technique for implementing a recursive Bayesian filter by Monte Carlo simulations, is used to track range-varying characteristics of the refractivity profiles. Basic ideas of employing the PF to solve the RFC problem are introduced. Both simulation and real data results are presented to confirm the feasibility of the PF-RFC approach.
Nanodosimetry-Based Plan Optimization for Particle Therapy
Casiraghi, Margherita; Schulte, Reinhard W.
2015-01-01
Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
Deso, Steven E; Idakoji, Ibrahim A; Muelly, Michael C; Kuo, William T
2016-06-01
Owing to a myriad of inferior vena cava (IVC) filter types and their potential complications, rapid and correct identification may be challenging when encountered on routine imaging. The authors aimed to develop an interactive mobile application that allows recognition of all IVC filters and related complications, to optimize the care of patients with indwelling IVC filters. The FDA Premarket Notification Database was queried from 1980 to 2014 to identify all IVC filter types in the United States. An electronic search was then performed on MEDLINE and the FDA MAUDE database to identify all reported complications associated with each device. High-resolution photos were taken of each filter type and corresponding computed tomographic and fluoroscopic images were obtained from an institutional review board-approved IVC filter registry. A wireframe and storyboard were created, and software was developed using HTML5/CSS compliant code. The software was deployed using PhoneGap (Adobe, San Jose, CA), and the prototype was tested and refined. Twenty-three IVC filter types were identified for inclusion. Safety data from FDA MAUDE and 72 relevant peer-reviewed studies were acquired, and complication rates for each filter type were highlighted in the application. Digital photos, fluoroscopic images, and CT DICOM files were seamlessly incorporated. All data were succinctly organized electronically, and the software was successfully deployed into Android (Google, Mountain View, CA) and iOS (Apple, Cupertino, CA) platforms. A powerful electronic mobile application was successfully created to allow rapid identification of all IVC filter types and related complications. This application may be used to optimize the care of patients with IVC filters. PMID:27247483
Modified particle filtering algorithm for single acoustic vector sensor DOA tracking.
Li, Xinbo; Sun, Haixin; Jiang, Liangxu; Shi, Yaowu; Wu, Yue
2015-01-01
The conventional direction of arrival (DOA) estimation algorithm with a static-sources assumption usually estimates the source angles of two adjacent moments independently, and the correlation between the moments is not considered. In this article, we focus on the DOA estimation of moving sources, and a modified particle filtering (MPF) algorithm is proposed with a state space model of a single acoustic vector sensor. Although the particle filtering (PF) algorithm has been introduced for acoustic vector sensor applications, it is not suitable for the case in which one angular dimension of the source is estimated with large deviation; the two angles (pitch angle and azimuth angle) cannot be simultaneously employed to update the state through the resampling process of the PF algorithm. To solve the problems mentioned above, the MPF algorithm is proposed, in which the state estimate of the previous moment is introduced into the particle sampling of the present moment to improve the importance function. Moreover, the independence of the pitch angle and azimuth angle is considered, and the two angles are sampled and evaluated, respectively. Then, the MUSIC spectrum function is used as the "likelihood" function of the MPF algorithm, and the modified PF-MUSIC (MPF-MUSIC) algorithm is proposed to improve the root mean square error (RMSE) and the probability of convergence. The theoretical analysis and the simulation results validate the effectiveness and feasibility of the two proposed algorithms. PMID:26501280
A localized particle filter for data assimilation in high-dimensional geophysical models.
NASA Astrophysics Data System (ADS)
Poterjoy, Jonathan; Anderson, Jeffrey
2016-04-01
This talk introduces an ensemble data assimilation approach based on the particle filter (PF) that has potential for nonlinear/non-Gaussian applications in geoscience. PFs make no assumptions regarding prior and posterior error distributions, allowing them to perform well for most applications provided a sufficiently large number of particles is used. The proposed method is similar to the PF in that ensemble realizations of the model state are weighted based on the likelihood of observations to approximate posterior probabilities of the system state. The new approach, denoted the local PF, reduces the influence of distant observations on the weight calculations via a localization function. Unlike standard PFs, the local PF provides accurate results using ensemble sizes small enough to be affordable for large models. Comparisons of the local PF and the ensemble Kalman filter (EnKF) using a simplified atmospheric general circulation model (with 25 particles) demonstrate that the new method is a viable data assimilation technique for large geophysical systems. The local PF also shows substantial benefits over the EnKF when observation networks consist of measurements that relate nonlinearly to the model state - analogous to remotely sensed data used frequently in atmospheric analyses.
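The distance-based tapering of observation influence is typically done with a compactly supported correlation function; the Gaspari-Cohn fifth-order polynomial is a common choice in ensemble data assimilation (the abstract does not specify which localization function the local PF uses, so this is an illustrative assumption):

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order piecewise-polynomial localization.
    Returns a weight in [0, 1]: 1 at zero separation, smoothly
    decaying to 0 at twice the half-width c."""
    r = np.abs(np.asarray(dist, dtype=float)) / c
    w = np.zeros_like(r)
    near = r <= 1.0
    mid = (r > 1.0) & (r <= 2.0)
    rn, rm = r[near], r[mid]
    w[near] = -0.25 * rn**5 + 0.5 * rn**4 + 0.625 * rn**3 - (5 / 3) * rn**2 + 1.0
    w[mid] = (rm**5) / 12 - 0.5 * rm**4 + 0.625 * rm**3 \
             + (5 / 3) * rm**2 - 5 * rm + 4 - 2 / (3 * rm)
    return w

weights = gaspari_cohn(np.array([0.0, 0.5, 1.0, 1.5, 3.0]), c=1.0)
```

Scaling each observation's contribution to a particle's weight by this factor is what prevents distant observations from dominating the weight calculation.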
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
Particle capture processes and evaporation on a microscopic scale in wet filters.
Mullins, Benjamin J; Braddock, Roger D; Agranovski, Igor E
2004-11-01
This paper details results of an experimental study of the capture of solid and liquid aerosols on fibrous filters wetted with water. A microscopic cell containing a single fibre (made from a variety of materials) was observed via a microscope, with a high speed CCD camera used to dynamically image the interactions between liquid droplets, zeolite and PSL particles, and fibres. Variable quantities of liquid irrigation were used, and the possibility of subsequent fibre regeneration after clogging or drying was also studied. It was found that drainage of the wetting liquid (water) from the fibres occurred even at very low irrigation rates, when the droplet consisted almost completely of captured particles. It was also found that the fibre was rapidly loaded with captured particles when irrigation was not supplied. However, almost complete regeneration (removal of the collected cake) by the liquid droplets occurred shortly after recommencement of the water supply. The study also examined the capture of oily liquid aerosols on fibres wetted with water. A predominance of the barrel-shaped droplet on the fibre was observed, with oil droplets displacing water droplets (if the oil and fibre combination created a barrel-shaped droplet), creating various compound droplets of oil and water not previously reported in the literature. This preferential droplet shape implies that, whatever substance initially wets a filter, a substance with a greater preferential adherence to the fibre will displace it. PMID:15380432
NASA Astrophysics Data System (ADS)
Mao, Jiandong; Li, Jinxuan
2015-10-01
Particle size distribution is essential for describing direct and indirect radiation of aerosols. Because the relationship between the aerosol size distribution and the aerosol optical thickness (AOT) is an ill-posed Fredholm integral equation of the first kind, the traditional techniques for determining such size distributions, such as the Phillips-Twomey regularization method, are often ambiguous. Here, we use an approach based on an improved particle swarm optimization algorithm (IPSO) to retrieve the aerosol size distribution. Using AOT data measured by a CE318 sun photometer in Yinchuan, we compared the aerosol size distributions retrieved using a simple genetic algorithm, a basic particle swarm optimization algorithm, and the IPSO. Aerosol size distributions for different weather conditions were analyzed, including sunny, dusty and hazy conditions. Our results show that the IPSO-based inversion method retrieved aerosol size distributions under all weather conditions, showing great potential for similar size distribution inversions.
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach to solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, combining a deterministic optimization algorithm with a stochastic search method. The efficacy of the proposed algorithm is illustrated via a numerical example.
Dynamic omni-directional vision localization using a beacon tracker based on particle filter
NASA Astrophysics Data System (ADS)
Cao, Zuoliang; Liu, Shiyu
2007-09-01
Omni-directional vision navigation for AGVs is of particular significance because it provides a panoramic sight within a single compact visual scene. This guidance technique involves target recognition, vision tracking, object positioning, and path programming. An algorithm for omni-vision-based global localization that utilizes two overhead features as a beacon pattern is proposed in this paper. An approach for geometric restoration of omni-vision images must be considered, since an inherent distortion exists. The mapping between image coordinates and the physical space parameters of the targets can be obtained from the imaging principle of the fisheye lens, and the localization of the robot can then be achieved by geometric computation. Dynamic localization employs a beacon tracker to follow the landmarks in real time during arbitrary movement of the vehicle. A coordinate transformation is devised for path programming based on time-sequence image analysis. Beacon recognition and tracking are key procedures for an omni-vision guided mobile unit, since conventional image processing techniques such as shape decomposition, description, and matching are not directly applicable in omni-vision. The particle filter (PF) has been shown to be successful for several nonlinear estimation problems. A beacon tracker based on a particle filter, which offers a probabilistic framework for dynamic state estimation in visual tracking, has been developed. Two particle filters are used independently to track the two landmarks, while a composite multiple-object tracking algorithm performs vehicle localization. We have implemented the tracking and localization system and demonstrated the effectiveness of the algorithm.
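The geometric restoration step maps image coordinates to bearing angles. A minimal sketch assuming an equidistant fisheye projection r = f·θ (the actual lens calibration determines the true mapping; `cx`, `cy`, and `f` are assumed camera parameters, not values from the paper):

```python
import math

def fisheye_to_bearing(u, v, cx, cy, f):
    """Map an image point (u, v) to (azimuth, polar angle) under an
    equidistant fisheye model r = f * theta.  cx, cy are the assumed
    principal point and f the focal length, all in pixels."""
    dx, dy = u - cx, v - cy
    azimuth = math.atan2(dy, dx)          # direction of the target around the axis
    theta = math.hypot(dx, dy) / f        # angle from the optical axis
    return azimuth, theta

# A point 100 px right of the principal point with f = 100 px lies
# 1 rad off-axis, straight along the image x direction.
az, theta = fisheye_to_bearing(420.0, 240.0, cx=320.0, cy=240.0, f=100.0)
```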
Heuristic optimization of the scanning path of particle therapy beams
Pardo, J.; Donetti, M.; Bourhaleb, F.; Ansarinejad, A.; Attili, A.; Cirio, R.; Garella, M. A.; Giordanengo, S.; Givehchi, N.; La Rosa, A.; Marchetto, F.; Monaco, V.; Pecka, A.; Peroni, C.; Russo, G.; Sacchi, R.
2009-06-15
Quasidiscrete scanning is a delivery strategy for proton and ion beam therapy in which the beam is turned off when a slice is finished and a new energy must be set, but not during the scanning between consecutive spots. Different scanning paths lead to different dose distributions due to the contribution of the unintended transit dose between spots. In this work an algorithm to optimize the scanning path for quasidiscrete scanned beams is presented. The classical simulated annealing algorithm is used. It is a heuristic algorithm frequently used in combinatorial optimization problems, which allows us to obtain nearly optimal solutions in acceptable running times. A study focused on the best choice of the operational parameters on which the algorithm performance depends is presented. The convergence properties of the algorithm have been further improved by using the next-neighbor algorithm to generate the starting paths. Scanning paths for two clinical treatments have been optimized. The optimized paths are found to be shorter than the back-and-forth, top-to-bottom (zigzag) paths generally provided by the treatment planning systems. The gamma method has been applied to quantify the improvement achieved on the dose distribution. Results show a reduction of the transit dose when the optimized paths are used. The benefit is clear especially when the fluence per spot is low, as in the case of repainting. The minimization of the transit dose can potentially allow the use of higher beam intensities, thus decreasing the treatment time. The algorithm implemented for this work can efficiently optimize the scanning path of quasidiscrete scanned particle beams. Optimized scanning paths decrease the transit dose and lead to better dose distributions.
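The combinatorial structure is that of a shortest-path (travelling-salesman-like) problem over the spot positions. A minimal sketch of simulated annealing with segment-reversal moves, seeded by a nearest-neighbor tour as the abstract describes; path length here stands in for the full transit-dose objective:

```python
import math
import random

def path_length(order, pts):
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbor(pts):
    """Greedy starting path, as used to seed the annealer."""
    todo = set(range(1, len(pts)))
    order = [0]
    while todo:
        last = order[-1]
        nxt = min(todo, key=lambda j: math.dist(pts[last], pts[j]))
        order.append(nxt)
        todo.remove(nxt)
    return order

def anneal(pts, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing with segment-reversal (2-opt style) moves.
    Keeps the best path seen, so it can only improve on the start."""
    rng = random.Random(seed)
    order = nearest_neighbor(pts)
    best, best_len = order[:], path_length(order, pts)
    cur_len, t = best_len, t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        cand_len = path_length(cand, pts)
        # Accept improvements always, worse paths with Boltzmann probability
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            order, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = order[:], cur_len
        t *= cooling
    return best, best_len

# Hypothetical 8x8 grid of spots within one energy slice
spots = [(float(x), float(y)) for y in range(8) for x in range(8)]
best, best_len = anneal(spots)
```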
Ruiz-Cruz, Riemann; Sanchez, Edgar N; Ornelas-Tellez, Fernando; Loukianov, Alexander G; Harley, Ronald G
2013-12-01
In this paper, the authors propose a particle swarm optimization (PSO) approach for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law that achieves trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLF depends on the selection of a matrix to achieve the control objectives; this matrix is determined by two mechanisms: first, fixed parameters are proposed for the matrix by trial and error, and then the PSO algorithm is used. The inverse optimal control scheme is illustrated via simulations for the DFIG, including a comparison between the two mechanisms. PMID:24273145
Wang, Jiaxi; Lin, Boliang; Jin, Junchen
2016-01-01
The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running times. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998
Actin filament tracking based on particle filters and stretching open active contour models.
Li, Hongsheng; Shen, Tian; Vavylonis, Dimitrios; Huang, Xiaolei
2009-01-01
We introduce a novel algorithm for actin filament tracking and elongation measurement. Particle Filters (PF) and Stretching Open Active Contours (SOAC) work cooperatively to simplify the modeling of PF in a one-dimensional state space while naturally integrating filament body constraints into tip estimation. Our algorithm reduces the PF state spaces to one-dimensional spaces by tracking filament bodies using SOAC and probabilistically estimating tip locations along the curve length of SOACs. Experimental evaluation on TIRFM image sequences with very low SNRs demonstrates the accuracy and robustness of this approach. PMID:20426170
Indoor patient position estimation using particle filtering and wireless body area networks.
Ren, Hongliang; Meng, Max Q H; Xu, Lisheng
2007-01-01
Wireless Body Area Networks (WBANs) have recently been promoted as a way to monitor a patient's physiological parameters unobtrusively and naturally. This paper takes advantage of the ongoing wireless communication links between body sensors to provide estimated position information for patients or particular body area networks, making daily activity surveillance possible for further analysis. The proposed particle-filtering-based localization algorithm infers a node's position solely from the received radio signal strength of beacons or neighboring nodes, requiring no additional hardware or instruments. Theoretical analysis and simulation experiments are presented to examine the performance of the location estimation method. PMID:18002445
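A rough sketch of this kind of RSS-driven particle filter follows. The log-distance path-loss model and its parameters, the random-walk motion model, and the multinomial resampling step are generic assumptions, not the authors' exact formulation.

```python
import math
import random

def rss_model(pos, beacon, p0=-40.0, n=2.0):
    """Log-distance path-loss model: expected RSS (dBm) at pos from a beacon."""
    d = max(math.dist(pos, beacon), 0.1)
    return p0 - 10.0 * n * math.log10(d)

def pf_step(particles, weights, beacons, rss_obs, step=0.5, sigma=4.0, rng=random):
    """One predict-update-resample cycle of the localization filter."""
    # Predict: random-walk motion model.
    particles = [(x + rng.gauss(0.0, step), y + rng.gauss(0.0, step))
                 for x, y in particles]
    # Update: Gaussian likelihood of each observed RSS value.
    new_w = []
    for p, w in zip(particles, weights):
        lik = 1.0
        for b, z in zip(beacons, rss_obs):
            err = z - rss_model(p, b)
            lik *= math.exp(-0.5 * (err / sigma) ** 2)
        new_w.append(w * lik + 1e-300)  # guard against total weight collapse
    s = sum(new_w)
    new_w = [w / s for w in new_w]
    # Resample (multinomial) and reset to uniform weights.
    idx = rng.choices(range(len(particles)), weights=new_w, k=len(particles))
    return [particles[i] for i in idx], [1.0 / len(particles)] * len(particles)

def estimate(particles, weights):
    """Weighted-mean position estimate."""
    return (sum(w * p[0] for p, w in zip(particles, weights)),
            sum(w * p[1] for p, w in zip(particles, weights)))
```

With beacons at the corners of a room and noiseless observations, the particle cloud contracts around the true position after a few tens of updates.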
State to State and Charged Particle Kinetic Modeling of Time Filtering and Cs Addition
Capitelli, M.; Gorse, C.; Longo, S.; Diomede, P.; Pagano, D.
2007-08-10
We present here an account of the progress of kinetic simulation of non-equilibrium plasmas in conditions of interest for negative ion production, using the 1D Bari code for hydrogen plasma simulation. The model includes the state-to-state kinetics of the vibrational level populations of hydrogen molecules, plus a PIC/MCC module for the multispecies dynamics of charged particles. In particular, we present new results for the modeling of two issues of great interest: time filtering and Cs addition via surface coverage.
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state
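The three-term velocity update mentioned above is, in its standard form, v ← w·v + c1·r1·(p_best − x) + c2·r2·(g_best − x), followed by x ← x + v. A generic sketch on a toy objective follows; the coefficients and test function are illustrative and have no connection to the paper's rocket model.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via standard particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Three terms with stochastic weights:
                # inertia + cognitive (own best) + social (swarm best).
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

Information sharing through the social term is what drives the swarm migration described in the abstract: every particle is pulled, stochastically, toward the best solution found so far.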
Optimal design of a generalized compound eye particle detector array
NASA Astrophysics Data System (ADS)
Nehorai, Arye; Liu, Zhi; Paldi, Eytan
2006-05-01
We analyze the performance of a novel detector array for detecting and localizing particle emitting sources. The array is spherically shaped and consists of multiple "eyelets," each having a conical shape with a lens on top and a particle detector subarray inside. The array's configuration is inspired by and generalizes the biological compound eye: it has a global spherical shape and allows a large number of detectors in each eyelet. The array can be used to detect particles including photons (e.g. visible light, X or γ rays), electrons, protons, neutrons, or α particles. We analyze the performance of the array by computing statistical Cramér-Rao bounds on the errors in estimating the direction of arrival (DOA) of the incident particles. In numerical examples, we first show the influence of the array parameters on its performance bound on the mean-square angular error (MSAE). Then we optimize the array's configuration according to a min-max criterion, i.e. minimize the worst-case lower bound of the MSAE. Finally we introduce two estimators of the source direction using the proposed array and analyze their performance, thereby showing that the performance bound is attainable in practice. Potential applications include artificial vision, astronomy, and security.
Characterization of exhaled breath particles collected by an electret filter technique.
Tinglev, Åsa Danielsson; Ullah, Shahid; Ljungkvist, Göran; Viklund, Emilia; Olin, Anna-Carin; Beck, Olof
2016-06-01
Aerosol particles that are present in exhaled breath carry nonvolatile components and have gained interest as a specimen for potential biomarkers. Nonvolatile compounds detected in exhaled breath include both endogenous and exogenous compounds. The aim of this study was to study particles collected with a new, simple and convenient filter technique. Samples of breath were collected from healthy volunteers from approximately 30 l of exhaled air. Particles were counted with an optical particle counter and two phosphatidylcholines were measured by liquid chromatography-tandem mass spectrometry. In addition, phosphatidylcholines and methadone were analysed in breath from patients in treatment with methadone, and oral fluid was collected with the Quantisal device. The results demonstrated that the majority of particles are <1 μm in size and that the fraction of larger particles contributes most to the total mass. The phosphatidylcholine PC(16 : 0/16 : 0) dominated over PC(16 : 0/18 : 1) and represented a major constituent of the particles. The concentration of the PC(16 : 0/16 : 0) homolog was significantly correlated (p < 0.001) with total mass. From the low concentrations of the two phosphatidylcholines and their relative abundance in oral fluid, a major contribution from the oral cavity could be ruled out. The concentration of PC(16 : 0/16 : 0) in breath was positively correlated with age (p < 0.01). An attempt to use PC(16 : 0/16 : 0) as a sample size indicator for methadone was not successful, as the large intra-individual variability between samplings even increased after normalization. In conclusion, it was demonstrated that exhaled breath sampled with the filter device represents a specimen corresponding to surfactant. The possible use of PC(16 : 0/16 : 0) as a sample size indicator was supported and deserves further investigations. We propose that the direct and selective collection of the breath aerosol particles is a promising strategy
Augmented Lagrangian Particle Swarm Optimization in Mechanism Design
NASA Astrophysics Data System (ADS)
Sedlaczek, Kai; Eberhard, Peter
The problem of optimizing nonlinear multibody systems is in general nonlinear and nonconvex. This is especially true for the dimensional synthesis process of rigid body mechanisms, where often only local solutions might be found with gradient-based optimization methods. An attractive alternative for solving such multimodal optimization problems is the Particle Swarm Optimization (PSO) algorithm. This stochastic solution technique allows a derivative-free search for a global solution without the need for any initial design. In this work, we present an extension to the basic PSO algorithm in order to solve the problem of dimensional synthesis with nonlinear equality and inequality constraints. It utilizes the Augmented Lagrange Multiplier Method in combination with an advanced non-stationary penalty function approach that does not rely on excessively large penalty factors for sufficiently accurate solutions. Although the PSO method is even able to solve nonsmooth and discrete problems, this augmented algorithm can additionally calculate accurate Lagrange multiplier estimates for differentiable formulations, which are helpful in the analysis process of the optimization results. We demonstrate this method and show its very promising applicability to the constrained dimensional synthesis process of rigid body mechanisms.
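A compact sketch of the augmented-Lagrangian idea for a single equality constraint follows. The derivative-free inner search below is a simple stand-in for the paper's PSO, and the fixed penalty factor is a simplification of its non-stationary penalty scheme; the point of the sketch is the multiplier update, which delivers accurate Lagrange multiplier estimates without excessively large penalty factors.

```python
import random

def aug_lagrangian(f, h, lam, r):
    """Augmented Lagrangian for a single equality constraint h(x) = 0."""
    return lambda x: f(x) + lam * h(x) + r * h(x) ** 2

def inner_minimize(F, x0, step=0.5, iters=5000, seed=0):
    """Derivative-free inner search (a stand-in for the paper's PSO):
    random-walk descent with a slowly decaying step size."""
    rng = random.Random(seed)
    x, fx = x0[:], F(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = F(cand)
        if fc < fx:
            x, fx = cand, fc
        step *= 0.999
    return x

def alm(f, h, x0, lam=0.0, r=1.0, outer=12):
    """Outer loop: minimize the augmented Lagrangian, then refresh the
    first-order multiplier estimate lam <- lam + 2 r h(x*)."""
    x = x0[:]
    for _ in range(outer):
        x = inner_minimize(aug_lagrangian(f, h, lam, r), x)
        lam += 2.0 * r * h(x)
    return x, lam
```

For f(x, y) = x² + y² with h(x, y) = x + y − 1, the iterates approach the constrained minimum (0.5, 0.5) and the multiplier estimate approaches its exact value −1.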
Particle Swarm and Ant Colony Approaches in Multiobjective Optimization
NASA Astrophysics Data System (ADS)
Rao, S. S.
2010-10-01
The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
Segmentation of Nerve Bundles and Ganglia in Spine MRI Using Particle Filters
Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina
2011-01-01
Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
Heeb, Norbert V; Rey, Maria Dolores; Zennegg, Markus; Haag, Regula; Wichser, Adrian; Schmid, Peter; Seiler, Cornelia; Honegger, Peter; Zeyer, Kerstin; Mohn, Joachim; Bürki, Samuel; Zimmerli, Yan; Czerwinski, Jan; Mayer, Andreas
2015-08-01
Iron-catalyzed diesel particle filters (DPFs) are widely used for particle abatement. Active catalyst particles, so-called fuel-borne catalysts (FBCs), are formed in situ, in the engine, when combusting precursors, which were premixed with the fuel. The obtained iron oxide particles catalyze soot oxidation in filters. Iron-catalyzed DPFs are considered as safe with respect to their potential to form polychlorinated dibenzodioxins/furans (PCDD/Fs). We reported that a bimetallic potassium/iron FBC supported an intense PCDD/F formation in a DPF. Here, we discuss the impact of fatty acid methyl ester (FAME) biofuel on PCDD/F emissions. The iron-catalyzed DPF indeed supported a PCDD/F formation with biofuel but remained inactive with petroleum-derived diesel fuel. PCDD/F emissions (I-TEQ) increased 23-fold when comparing biofuel and diesel data. Emissions of 2,3,7,8-TCDD, the most toxic congener [toxicity equivalence factor (TEF) = 1.0], increased 90-fold, and those of 2,3,7,8-TCDF (TEF = 0.1) increased 170-fold. Congener patterns also changed, indicating a preferential formation of tetra- and penta-chlorodibenzofurans. Thus, an inactive iron-catalyzed DPF becomes active, supporting a PCDD/F formation, when operated with biofuel containing impurities of potassium. Alkali metals are inherent constituents of biofuels. According to the current European Union (EU) legislation, levels of 5 μg/g are accepted. We conclude that risks for a secondary PCDD/F formation in iron-catalyzed DPFs increase when combusting potassium-containing biofuels. PMID:26176879
Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me...
Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation
NASA Astrophysics Data System (ADS)
Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao
2015-12-01
Spatial joins among Web Feature Services (WFS) are time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Through recursive partitioning, the data skew of sub-areas is exploited to reduce data transmission using a spatial semi-join. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
NASA Astrophysics Data System (ADS)
Ikeda, Takeshi; Kawamoto, Mitsuru; Sashima, Akio; Suzuki, Keiji; Kurumatani, Koichi
In the field of ubiquitous computing, positioning systems that can provide users' location information have attracted attention as an important technical element applicable to various services, for example, indoor navigation, evacuation, market research, and guidance services. Researchers have proposed various outdoor and indoor positioning systems; in this paper, we deal with indoor positioning. Many conventional indoor positioning systems require expensive infrastructure because they measure the propagation times of radio waves to locate users with high accuracy. In this paper, we propose an indoor autonomous positioning system using radio signal strengths (RSSs) based on ISM band communications. To estimate users' positions, the proposed system utilizes a particle filter, one of the Monte Carlo methods. Because RSS information is used, the equipment making up the system is inexpensive compared with conventional indoor positioning systems, and the system can be installed easily. Moreover, because a particle filter is used to estimate the user's position, the system can carry out position estimation robustly even if the RSS fluctuates due to, for example, multipath propagation. We installed the proposed system on one floor of a building and carried out experiments to verify its validity. As a result, we confirmed that the average estimation error of the proposed system was about 1.8 m, which is accurate enough for the services mentioned above.
Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Yan, H.; DeChant, C. M.; Moradkhani, H.
2014-12-01
Data assimilation (DA) is a popular method for merging information from multiple sources (i.e., models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the discharge observed at the outlet), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed through a case study of a basin with natural hydrologic processes from the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) grid (12.5 km) soil moisture retrievals and the streamflow observed at five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.
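One ingredient any such multivariate scheme needs is a particle likelihood that honors spatial correlation between observation errors at nearby gauges. The sketch below is a generic multivariate Gaussian log-density via a small Cholesky solve; the covariance itself would come from the geostatistical model, and this is not the authors' code.

```python
import math

def correlated_log_likelihood(innov, cov):
    """Log-likelihood of an innovation vector (observation minus particle
    prediction) under N(0, cov), where cov is a symmetric positive-definite
    spatial covariance between gauges."""
    n = len(innov)
    # Cholesky factorization cov = L L^T.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    # Forward substitution: solve L y = innov.
    y = []
    for i in range(n):
        y.append((innov[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i])
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * (sum(v * v for v in y) + logdet + n * math.log(2 * math.pi))
```

Exponentiating these log-likelihoods (with a suitable normalization) yields particle weights that down-weight particles whose errors are inconsistent with the assumed spatial structure, rather than treating each gauge independently.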
Streamflow data assimilation for the mesoscale hydrologic model (mHM) using particle filtering
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis; Choi, Shin-woo
2015-04-01
Data assimilation (DA) has become popular for increasing the certainty of hydrologic predictions by accounting for various sources of uncertainty throughout the hydrologic modeling chain. In this study, we develop a data assimilation framework for the mesoscale hydrologic model (mHM 5.2, http://www.ufz.de/mhm) using particle filtering, a sequential DA method for non-linear and non-Gaussian models. The mHM is a grid-based distributed model built on numerical approximations of dominant hydrologic processes, with similarities to the HBV and VIC models. The developed DA framework for the mHM represents simulation uncertainty by model ensembles and updates the spatial distributions of model state variables when new observations become available in each updating time interval. The proposed method is evaluated within several large European basins by assimilating multiple streamflow measurements at a daily interval. The dimensional limitations of particle filtering are resolved by effective noise specification methods, which use the spatial and temporal correlation of weather forcing data to represent model structural uncertainty. The presentation will focus on the gains and limitations of streamflow data assimilation in several hindcasting experiments. In addition, the impacts of non-Gaussian distributions of state variables on model performance will be discussed.
Ma, Huan; Shen, Henggen; Shui, Tiantian; Li, Qing; Zhou, Liuke
2016-01-01
Size- and time-dependent aerodynamic behaviors of indoor particles, including PM1.0, were evaluated in a school office in order to test the performance of air-cleaning devices using different filters. In-situ real-time measurements were taken using an optical particle counter. The filtration characteristics of filter media, including single-pass efficiency, volume and effectiveness, were evaluated and analyzed. The electret filter (EE) medium shows better initial removal efficiency than the high efficiency (HE) medium in the 0.3-3.5 μm particle size range, while under the same face velocity, the filtration resistance of the HE medium is several times higher than that of the EE medium. During service life testing, the efficiency of the EE medium decreased to 60% with a total purifying air flow of 25 × 10⁴ m³/m². The resistance curve rose slightly before the efficiency reached the bottom, and then increased almost exponentially. The single-pass efficiency of portable air cleaner (PAC) with the pre-filter (PR) or the active carbon granule filter (CF) was relatively poor. While PAC with the pre-filter and the high efficiency filter (PR&HE) showed maximum single-pass efficiency for PM1.0 (88.6%), PAC with the HE was the most effective at removing PM1.0. The enhancement of PR with HE and electret filters augmented the single-pass efficiency, but lessened the airflow rate and effectiveness. Combined with PR, the decay constant of large-sized particles could be greater than for PACs without PR. Without regard to the lifetime, the electret filters performed better with respect to resource saving and purification improvement. A most penetrating particle size range (MPPS: 0.4-0.65 μm) exists in both HE and electret filters; the MPPS tends to become larger after HE and electret filters are combined with PR. These results serve to provide a better understanding of the indoor particle removal performance of PACs when combined with different kinds of filters in school
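The single-pass efficiency, effectiveness, and decay-rate quantities used in the study above reduce to a few standard formulas. A sketch follows; the function names and the log-linear decay fit are generic air-cleaning conventions, not tied to this study's instruments.

```python
import math

def single_pass_efficiency(c_in, c_out):
    """Fraction of particles removed in one pass through the device:
    eta = 1 - C_out / C_in."""
    return 1.0 - c_out / c_in

def clean_air_delivery_rate(airflow_m3h, efficiency):
    """Effectiveness expressed as an equivalent clean-air flow: CADR = Q * eta."""
    return airflow_m3h * efficiency

def decay_constant(times_h, concentrations):
    """Total particle decay rate (1/h) from a concentration time series,
    via the least-squares slope of ln C versus t."""
    logs = [math.log(c) for c in concentrations]
    n = len(times_h)
    tbar = sum(times_h) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times_h, logs))
    den = sum((t - tbar) ** 2 for t in times_h)
    return -num / den
```

For example, an inlet count of 100 and outlet count of 11.4 gives the 88.6% single-pass efficiency the abstract reports for the PR&HE combination; multiplying by the purifier's airflow converts that into an effectiveness comparable across filter combinations.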
NASA Astrophysics Data System (ADS)
Shao, Gui-Fang; Wang, Ting-Na; Liu, Tun-Dong; Chen, Jun-Ren; Zheng, Ji-Wen; Wen, Yu-Hua
2015-01-01
Pt-Pd alloy nanoparticles, as potential catalyst candidates for new-energy applications such as fuel cells and lithium-ion batteries owing to their excellent reactivity and selectivity, have attracted growing attention in recent years. Since structure determines the physical and chemical properties of nanoparticles, developing a reliable method for searching the stable structures of Pt-Pd alloy nanoparticles has become increasingly important for exploring the origin of their properties. In this article, we have employed the particle swarm optimization algorithm to investigate the stable structures of alloy nanoparticles with fixed shape and atomic proportion. An improved discrete particle swarm optimization algorithm has been proposed and the corresponding scheme presented. Subsequently, the swap operator and swap sequence have been applied to reduce the probability of premature convergence to local optima. Furthermore, the parameters of the exchange probability and the 'particle' size have also been considered. Finally, tetrahexahedral Pt-Pd alloy nanoparticles have been used to test the effectiveness of the proposed method. The calculated results verify that the improved particle swarm optimization algorithm has superior convergence and stability compared with the traditional one.
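The swap-operator and swap-sequence mechanics above can be sketched schematically. Here a particle is a permutation (e.g., an assignment of atom types to lattice sites), a "velocity" is the sequence of swaps transforming it into a target, and the stochastic weights become per-swap acceptance probabilities; the toy fitness stands in for the nanoparticle potential energy, which is omitted.

```python
import random

def swaps_toward(current, target):
    """Swap sequence that transforms permutation `current` into `target`
    when its swaps are applied in order."""
    cur = current[:]
    pos = {v: i for i, v in enumerate(cur)}
    seq = []
    for i, want in enumerate(target):
        if cur[i] != want:
            j = pos[want]
            seq.append((i, j))
            pos[cur[i]], pos[cur[j]] = j, i
            cur[i], cur[j] = cur[j], cur[i]
    return seq

def dpso_step(x, pbest, gbest, fitness, p_cog=0.5, p_soc=0.5, rng=random):
    """One discrete-PSO move: apply, with the given probabilities, the swaps
    that pull the particle toward its personal and global bests."""
    for target, prob in ((pbest, p_cog), (gbest, p_soc)):
        for i, j in swaps_toward(x, target):
            if rng.random() < prob:
                x[i], x[j] = x[j], x[i]
    return x, fitness(x)
```

Setting both probabilities below 1 is what preserves diversity: a particle only partially follows the pull toward the bests, which is the mechanism the article uses against premature convergence.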
Generalized Particle Swarm Algorithm for HCR Gearing Geometry Optimization
NASA Astrophysics Data System (ADS)
Kuzmanović, Siniša; Vereš, Miroslav; Rackov, Milan
2012-12-01
NASA Astrophysics Data System (ADS)
Colecchia, Federico
2014-03-01
Low-energy strong interactions are a major source of background at hadron colliders, and methods of subtracting the associated energy flow are well established in the field. Traditional approaches treat the contamination as diffuse, and estimate background energy levels either by averaging over large data sets or by restricting to given kinematic regions inside individual collision events. On the other hand, more recent techniques take into account the discrete nature of background, most notably by exploiting the presence of substructure inside hard jets, i.e. inside collections of particles originating from scattered hard quarks and gluons. However, none of the existing methods subtract background at the level of individual particles inside events. We illustrate the use of an algorithm that will allow particle-by-particle background discrimination at the Large Hadron Collider, and we envisage this as the basis for a novel event filtering procedure upstream of the official reconstruction chains. Our hope is that this new technique will improve physics analysis when used in combination with state-of-the-art algorithms in high-luminosity hadron collider environments.
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2014-12-01
A novel algorithm is presented in this study for the estimation of spacecraft attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) of not accounting for the latest measurements in the proposal distribution. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization increases the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
Linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence in solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to show experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
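The two ingredients the paper tunes, the LDIW schedule and particle velocity limits taken as a percentage of the search space limits, can be sketched as follows; this is a minimal illustration in which the value v_frac=0.15 is our placeholder, not the experimentally obtained percentage:

```python
import random

def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight."""
    return w_start - (w_start - w_end) * t / t_max

def pso_ldiw(f, bounds, n=30, t_max=200, c1=2.0, c2=2.0, v_frac=0.15, seed=0):
    """LDIW-PSO with velocity limits set as a fraction of the search range."""
    rng = random.Random(seed)
    dim = len(bounds)
    v_max = [v_frac * (hi - lo) for lo, hi in bounds]
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in x]
    gbest = min(pbest, key=f)
    for t in range(t_max):
        w = ldiw(t, t_max)
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-v_max[d], min(v_max[d], v[i][d]))  # clamp velocity
                lo, hi = bounds[d]
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))     # clamp position
            if f(x[i]) < f(pbest[i]):
                pbest[i] = list(x[i])
        gbest = min(pbest, key=f)
    return gbest

# 2-D sphere function: the swarm should settle very close to the origin.
best = pso_ldiw(lambda p: p[0] ** 2 + p[1] ** 2, [(-5.12, 5.12)] * 2)
```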
Optimization of nanoparticle core size for magnetic particle imaging
Ferguson, Matthew R.; Minard, Kevin R.; Krishnan, Kannan M.
2009-05-01
Magnetic Particle Imaging (MPI) is a powerful new diagnostic visualization platform designed for measuring the amount and location of superparamagnetic nanoscale molecular probes (NMPs) in biological tissues. Promising initial results indicate that MPI can be extremely sensitive and fast, with good spatial resolution for imaging human patients or live animals. Here, we present modeling results that show how MPI sensitivity and spatial resolution both depend on NMP-core physical properties, and how MPI performance can be effectively optimized through rational core design. Monodisperse magnetite cores are attractive since they are readily produced with a biocompatible coating and controllable size that facilitates quantitative imaging.
GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
Optimal discrete-time H∞/γ0 filtering and control under unknown covariances
NASA Astrophysics Data System (ADS)
Kogan, Mark M.
2016-04-01
New stochastic γ0 and mixed H∞/γ0 filtering and control problems for discrete-time systems under completely unknown covariances are introduced and solved. The performance measure γ0 is the worst-case steady-state averaged variance of the error signal in response to the stationary Gaussian white zero-mean disturbance with unknown covariance and identity variance. The performance measure H∞/γ0 is the worst-case power norm of the error signal in response to two input disturbances in different channels, one of which is the deterministic signal with a bounded energy and the other is the stationary Gaussian white zero-mean signal with a bounded variance provided the weighting sum of disturbance powers equals one. In this framework, it is possible to consider at the same time both deterministic and stochastic disturbances highlighting their mutual effects. Our main results provide the complete characterisations of the above performance measures in terms of linear matrix inequalities and therefore both the γ0 and H∞/γ0 optimal filters and controllers can be computed by convex programming. H∞/γ0 optimal solution is shown to be actually a trade-off between optimal solutions to the H∞ and γ0 problems for the corresponding channels.
What is Particle Swarm optimization? Application to hydrogeophysics (Invited)
NASA Astrophysics Data System (ADS)
Fernández Martínez, J.; García Gonzalo, E.; Mukerji, T.
2009-12-01
Inverse problems are generally ill-posed. This yields lack of uniqueness and/or numerical instabilities. These features cause local optimization methods without prior information to provide unpredictable results, not being able to discriminate among the multiple models consistent with the end criteria. Stochastic approaches to inverse problems consist in shifting attention to the probability of existence of certain interesting subsurface structures instead of "looking for a unique model". Some well-known stochastic methods include genetic algorithms and simulated annealing. A more recent method, Particle Swarm Optimization, is a global optimization technique that has been successfully applied to solve inverse problems in many engineering fields, although its use in geosciences is still limited. Like all stochastic methods, PSO requires reasonably fast forward modeling. The basic idea behind PSO is that each model searches the model space according to its misfit history and the misfit of the other models of the swarm. PSO algorithm can be physically interpreted as a damped spring-mass system. This physical analogy was used to define a whole family of PSO optimizers and to establish criteria, based on the stability of particle swarm trajectories, to tune the PSO parameters: inertia, local and global accelerations. In this contribution we show application to different low-cost hydrogeophysical inverse problems: 1) a salt water intrusion problem using Vertical Electrical Soundings, 2) the inversion of Spontaneous Potential data for groundwater modeling, 3) the identification of Cole-Cole parameters for Induced Polarization data. We show that with this stochastic approach we are able to answer questions related to risk analysis, such as what is the depth of the salt intrusion with a certain probability, or giving probabilistic bounds for the water table depth. Moreover, these measures of uncertainty are obtained with small computational cost and time, allowing us a very
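The damped spring-mass interpretation is easiest to see in the deterministic one-particle recurrence, where the stochastic accelerations are replaced by a fixed total acceleration phi; this simplified sketch (our own reduction, not the family of optimizers from the abstract) illustrates the trajectory-stability reasoning used to tune the PSO parameters:

```python
def pso_trajectory(w, phi, x0=1.0, v0=0.0, attractor=0.0, steps=100):
    """Deterministic one-particle PSO recurrence: a damped spring-mass system
    oscillating about the attractor (the weighted mean of the best positions)."""
    x, v = x0, v0
    xs = [x]
    for _ in range(steps):
        v = w * v + phi * (attractor - x)  # damping w, spring constant phi
        x = x + v
        xs.append(x)
    return xs

# Inside the stability region (|w| < 1, 0 < phi < 2 * (1 + w)) the trajectory
# spirals into the attractor; outside it, the oscillation grows without bound.
stable = pso_trajectory(w=0.7, phi=1.5)
unstable = pso_trajectory(w=0.9, phi=4.5)
```

Criteria of this kind, based on when the one-particle trajectory converges, are what allow the inertia and the local and global accelerations to be tuned systematically.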
A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela
2015-09-01
Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications.
Optimized model of oriented-line-target detection using vertical and horizontal filters
NASA Astrophysics Data System (ADS)
Westland, Stephen; Foster, David H.
1995-08-01
A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters, (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required for providing an accurate description of psychophysical data on orientation increment thresholds.
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Caner Yurteri
2001-08-20
The proposed research is directed at evaluating the effect of flame aerodynamics on NOx emissions from coal fired burners in a systematic manner. This fundamental research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low NOx burners to the kinetic emissions limit (below 0.2 lb./MMBTU). Experimental studies include both cold and hot flow evaluations of the following parameters: flame holder geometry, secondary air swirl, primary and secondary inlet air velocity, coal concentration in the primary air and coal particle size distribution. Hot flow experiments will also evaluate the effect of wall temperature on burner performance. Cold flow studies will be conducted with surrogate particles as well as pulverized coal. The cold flow furnace will be similar in size and geometry to the hot-flow furnace but will be designed to use a laser Doppler velocimeter/phase Doppler particle size analyzer. The results of these studies will be used to predict particle trajectories in the hot-flow furnace as well as to estimate the effect of flame holder geometry on the furnace flow field. The hot-flow experiments will be conducted in a novel near-flame down-flow pulverized coal furnace. The furnace will be equipped with externally heated walls. Both reactors will be sized to minimize wall effects on particle flow fields. The cold-flow results will be compared with Fluent computational fluid dynamics model predictions and correlated with the hot-flow results, with the overall goal of providing insight for novel low NOx burner geometries.
Optimal reconstruction of reaction rates from particle distributions
NASA Astrophysics Data System (ADS)
Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2010-05-01
Random walk particle tracking methodologies to simulate solute transport of conservative species constitute an attractive alternative for their computational efficiency and absence of numerical dispersion. Yet, problems stemming from the reconstruction of concentrations from particle distributions have typically prevented its use in reactive transport problems. The numerical problem mainly arises from the need to first reconstruct the concentrations of species/components from a discrete number of particles, which is an error prone process, and then computing a spatial functional of the concentrations and/or its derivatives (either spatial or temporal). Errors are then propagated, so that common strategies to reconstruct this functional require an unfeasible amount of particles when dealing with nonlinear reactive transport problems. In this context, this article presents a methodology to directly reconstruct this functional based on kernel density estimators. The methodology mitigates the error propagation in the evaluation of the functional by avoiding the prior estimation of the actual concentrations of species. The multivariate kernel associated with the corresponding functional depends on the size of the support volume, which defines the area over which a given particle can influence the functional. The shape of the kernel functions and the size of the support volume determines the degree of smoothing, which is optimized to obtain the best unbiased predictor of the functional using an iterative plug-in support volume selector. We applied the methodology to directly reconstruct the reaction rates of a precipitation/dissolution problem involving the mixing of two different waters carrying two aqueous species in chemical equilibrium and moving through a randomly heterogeneous porous medium.
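The kernel idea can be sketched in one dimension: each particle spreads its mass over a support volume (the bandwidth h), and the functional, here simply the concentration, is evaluated without first binning particles. This is a minimal illustration, not the iterative plug-in support volume selector of the article:

```python
import math

def kde_concentration(particles, x, h, total_mass=1.0):
    """Gaussian-kernel reconstruction of the concentration at x from particle
    positions; h is the bandwidth (the size of the support volume)."""
    norm = total_mass / (len(particles) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in particles)

# 1000 particles spread uniformly on [0, 1]: the reconstructed concentration
# should be close to 1 in the interior and about half that at the boundary.
parts = [i / 999.0 for i in range(1000)]
c_mid = kde_concentration(parts, 0.5, h=0.05)
c_edge = kde_concentration(parts, 0.0, h=0.05)
```

A larger h gives a smoother but more biased reconstruction, which is exactly the trade-off the optimized support volume resolves.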
Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization
Schutte, Jaco F.; Koh, Byung; Reinbolt, Jeffrey A.; Haftka, Raphael T.; George, Alan D.; Fregly, Benjamin J.
2006-01-01
Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for Particle Filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements applied to the nonlinear integer resolution problem for GPS CP navigation using Particle Filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay in bounds. These bounds are given by the confidence interval of a normal probability distribution for a multi-variate state. Two required sample sizes, one maintaining the mean error and one the variance error within the bounds, are derived. The time of resampling is determined when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, should be resolved. The PF is applied to this integer ambiguity resolution problem, where the relative navigation state estimation involves nonlinear observations and a nonlinear dynamics equation. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space and the integer ambiguities are resolved without using the usual hypothesis tests to search for the integer ambiguity. The ART manages the number of position samples and the frequency of the
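For a scalar Gaussian state, the confidence-interval reasoning reduces to two closed-form sample sizes; this is a simplified sketch of the idea under that assumption (the dissertation's multivariate bounds are more general):

```python
import math

def particles_for_mean(sigma, tol, z=1.96):
    """Samples needed so the sample-mean error stays within +/- tol at the
    z-level of a normal confidence interval: N >= (z * sigma / tol)**2."""
    return math.ceil((z * sigma / tol) ** 2)

def particles_for_variance(sigma, tol, z=1.96):
    """Counterpart for the sample variance, using Var(s^2) ~ 2*sigma^4/(N-1)
    for Gaussian samples: N >= 1 + 2 * (z * sigma**2 / tol)**2."""
    return math.ceil(1.0 + 2.0 * (z * sigma ** 2 / tol) ** 2)
```

Resampling would then be scheduled at the step where the variance-based requirement crosses the mean-based one, as described above.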
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities, and thickness. Particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the values of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The result is promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
Energetic optimization of ion conduction rate by the K+ selectivity filter
NASA Astrophysics Data System (ADS)
Morais-Cabral, João H.; Zhou, Yufeng; MacKinnon, Roderick
2001-11-01
The K+ selectivity filter catalyses the dehydration, transfer and rehydration of a K+ ion in about ten nanoseconds. This physical process is central to the production of electrical signals in biology. Here we show how nearly diffusion-limited rates are achieved, by analysing ion conduction and the corresponding crystallographic ion distribution in the selectivity filter of the KcsA K+ channel. Measurements with K+ and its slightly larger analogue, Rb+, lead us to conclude that the selectivity filter usually contains two K+ ions separated by one water molecule. The two ions move in a concerted fashion between two configurations, K+-water-K+-water (1,3 configuration) and water-K+-water-K+ (2,4 configuration), until a third ion enters, displacing the ion on the opposite side of the queue. For K+, the energy difference between the 1,3 and 2,4 configurations is close to zero, the condition of maximum conduction rate. The energetic balance between these configurations is a clear example of evolutionary optimization of protein function.
Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering
NASA Astrophysics Data System (ADS)
Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing
2016-04-01
This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise condition. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applied to InSAR phase filtering, three modifications are proposed based on spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, using the coherence magnitudes as weights, the mean of each cluster is estimated using a weighted mean to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with, or better than, the non-local interferometric SAR (NL-InSAR) method.
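The LMMSE estimation underlying a PLOW-style filter can be illustrated in one dimension as Wiener shrinkage toward a locally estimated mean; this is our simplification of the patch-and-cluster scheme, not the paper's implementation:

```python
def lmmse_denoise(patch, noise_var):
    """LMMSE (Wiener) shrinkage of a noisy patch toward its local mean."""
    n = len(patch)
    mean = sum(patch) / n
    total_var = sum((p - mean) ** 2 for p in patch) / n
    signal_var = max(total_var - noise_var, 0.0)  # estimated clean-signal variance
    denom = signal_var + noise_var
    gain = signal_var / denom if denom > 0 else 0.0
    return [mean + gain * (p - mean) for p in patch]
```

When the noise variance dominates, the gain drops to zero and every pixel is pulled to the cluster mean; when the noise is negligible, the patch passes through unchanged. The coherence-magnitude weighting described above refines how that mean is formed.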
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Sakurai, G.; Iizumi, T.
2010-12-01
The climatological sensitivities of crop yields to changes in mean temperature and precipitation during the growing season were statistically examined. The sensitivity is defined as the change of yield in response to the change of climatic conditions in the growth period from sowing to harvesting. The target crops are maize and soybean, which are cultivated in the United States, Brazil and China, the world's major producing countries. We collected yield data for maize and soybean at the county level for the United States from the USDA for 1980-2006, at the Município level for Brazil for 1990-2006, and at the Xiàn level for China for 1980-2005. Although data for only four provinces in China are used (Heilongjiang, Henan, Liaoning, and Shandong), the total production of the four provinces reaches about 40% (maize) and 51% (soybean) of the country total (USDA 1997). We used JRA-25 reanalysis climate data distributed by the Japanese Meteorological Agency for 1980 through 2006, with a resolution of 1.125° in latitude and longitude. To match this resolution, the crop yield data were reallocated onto the same grids as the climate data. To eliminate economic and technological effects on yield, we detrended the time series data of yield and climate. We applied a local regression model to perform the detrending (cubic weighting and the M estimator of Tukey's bi-weight function). The time series of deviations from the trend were examined against the changes in temperature and precipitation for each grid using the particle filter. The particle filter used here is based on a self-organizing state-space model. As a result, in the northern hemisphere, positive sensitivity, i.e. an increase in temperature shifts the crop yield upward, is generally found especially at higher latitudes, while negative sensitivity is found at lower latitudes. Neutral sensitivity is found in the regions where the mean temperature during growing season
Stamoulis, Catherine; Betensky, Rebecca A
2016-01-01
We aim to improve the performance of the previously proposed signal decomposition matched filtering (SDMF) method [26] for the detection of copy-number variations (CNV) in the human genome. Through simulations, we show that the modified SDMF is robust even at high noise levels and outperforms the original SDMF method, which indirectly depends on CNV frequency. Simulations are also used to develop a systematic approach for selecting relevant parameter thresholds in order to optimize sensitivity, specificity and computational efficiency. We apply the modified method to array CGH data from normal samples in the cancer genome atlas (TCGA) and compare detected CNVs to those estimated using circular binary segmentation (CBS) [19], a hidden Markov model (HMM)-based approach [11] and a subset of CNVs in the Database of Genomic Variants. We show that a substantial number of previously identified CNVs are detected by the optimized SDMF, which also outperforms the other two methods. PMID:27295643
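The matched-filtering half of SDMF can be illustrated with a normalized sliding correlation against a CNV-like step template; this is a generic sketch of matched filtering, not the authors' implementation:

```python
def matched_filter(signal, template):
    """Slide the template along the signal and return normalized correlation
    scores in [-1, 1]; peaks mark candidate template matches."""
    m = len(template)
    t_energy = sum(v * v for v in template) ** 0.5
    scores = []
    for i in range(len(signal) - m + 1):
        seg = signal[i:i + m]
        s_energy = sum(v * v for v in seg) ** 0.5 or 1.0  # guard all-zero windows
        scores.append(sum(a * b for a, b in zip(seg, template)) / (s_energy * t_energy))
    return scores

# A toy copy-number gain of length 3 buried in a flat profile:
scores = matched_filter([0, 0, 0, 1, 1, 1, 0, 0], [1, 1, 1])
```

Thresholding the score peaks is where parameter selection enters, which is the sensitivity/specificity trade-off the paper optimizes systematically.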
Numerical experiments with an implicit particle filter for the shallow water equations
NASA Astrophysics Data System (ADS)
Souopgui, I.; Chorin, A. J.; Hussaini, M.
2012-12-01
of the state space. In our numerical experiments, we varied the availability of the data (in both space and time) as well as the variance of the observation noise. We found that the implicit particle filter is reliable and efficient in all scenarios we considered. The implicit sampling method could improve the accuracy of the traditional variational approach. Moreover, we obtain quantitative measures of the uncertainty of the state estimate ``for free,'' while no information about the uncertainty is easily available using the traditional 4D-Var method only.
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is very simple in concept, easy to implement and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with those of the real coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with the PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for the CMOS inverter design.
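A rough sketch of a velocity update in the spirit described above; the placement of the direction reversal factors and the craziness term follows our reading of the description, and all parameter values are placeholders rather than the paper's exact formulation:

```python
import random

def crpso_velocity(v, x, pbest, gbest, rng,
                   w=0.7, c1=1.5, c2=1.5, p_craze=0.3, v_craze=0.2):
    """One CRPSO-style velocity update: the previous-velocity term may be
    direction-reversed, and a small 'craziness' velocity is injected with a
    predefined probability to maintain swarm diversity."""
    sign_prev = 1.0 if rng.random() < 0.5 else -1.0   # direction reversal factor
    v_new = (sign_prev * w * v
             + c1 * rng.random() * (pbest - x)
             + c2 * rng.random() * (gbest - x))
    if rng.random() < p_craze:                        # predefined craziness probability
        sign_craze = 1.0 if rng.random() < 0.5 else -1.0
        v_new += sign_craze * v_craze * rng.random()  # craziness velocity term
    return v_new
```

The craziness injection plays the role of the sudden direction change seen in flocking, keeping particles from collapsing onto a single trajectory.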
Regular silicon pillars and dichroic filters produced via particle-imprinted membranes
NASA Astrophysics Data System (ADS)
Ladenburger, Andreas; Reiser, Anton; Konle, Johannes; Feneberg, Martin; Sauer, Rolf; Thonke, Klaus; Yan, Feng; Goedel, Werner A.
2007-02-01
We have produced regular silicon pillar arrays and porous gold films on the 100 nm scale without any optical or e-beam lithography. Using particle-assisted wetting we produced a nanoporous polymer membrane on silicon. The membrane incorporated a regular array of pores generated by embedding silica particles in an organic liquid and subsequently removing the particles after polymerization of the liquid. Gold vapor was deposited onto the silicon wafer coated by the porous polymer structure. This process created an array of gold dots on the substrate at the bottom of the pores, and at the same time, a sievelike porous gold layer on top of the polymer matrix. The top layer was lifted off and used as an optical short-pass filter. After removal of the polymer membrane, the remaining gold dot pattern on the substrate served as a mask in a deep reactive ion etching process. We obtain large-area arrays of silicon nanopillars up to 1.5 μm in height and below 200 nm in diameter.
Particle swarm optimization with scale-free interactions.
Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu
2014-01-01
The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors, like birds in a flock, to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to traditional PSO with a fully-connected or regular topology, the scale-free topology used in SF-PSO incorporates diversity in the individuals' searching and information-dissemination abilities, leading to a quite different optimization process. Systematic results on several standard test functions demonstrate that SF-PSO achieves a better balance between convergence speed and optimum quality, accounting for its much better performance than that of traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications in computational intelligence and complex networks. PMID:24859007
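The idea of replacing the fully-connected topology with a scale-free interaction network can be sketched as follows. This is a hedged illustration, not the authors' code: the preferential-attachment graph construction and all PSO constants are simplified assumptions.

```python
import random

def barabasi_albert(n, m, rng):
    """Grow a scale-free graph by preferential attachment (pure-Python sketch)."""
    targets = list(range(m))
    repeated = []          # nodes repeated by degree => preferential attachment
    adj = {i: set() for i in range(n)}
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
            repeated += [v, t]
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def sf_pso(f, dim=2, n=30, iters=200, seed=2):
    """PSO where each particle learns from the best of its scale-free neighbours."""
    rng = random.Random(seed)
    adj = barabasi_albert(n, 3, rng)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    pb = [f(x) for x in X]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n):
            # neighbourhood best replaces the global best of canonical PSO
            nb = min(adj[i] | {i}, key=lambda j: pb[j])
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (P[nb][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pb[i]:
                pb[i], P[i] = fx, X[i][:]
    b = min(range(n), key=lambda i: pb[i])
    return pb[b], P[b]
```

Hub particles aggregate information quickly while low-degree particles explore more independently, which is the balance the abstract describes.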
NASA Astrophysics Data System (ADS)
Yu, Miao; Liu, Cunjia; Chen, Wen-hua; Chambers, Jonathon
2014-06-01
In this work, we propose a new ground moving target indicator (GMTI) radar-based ground vehicle tracking method which exploits domain knowledge. Multiple state models are considered, and a Monte Carlo sampling-based algorithm is preferred due to the manoeuvring of the ground vehicle and the non-linearity of the GMTI measurement model. Unlike commonly used algorithms such as the interacting multiple model particle filter (IMMPF) and the bootstrap multiple model particle filter (BS-MMPF), we propose a new algorithm integrating the more efficient auxiliary particle filter (APF) into a Bayesian framework. Moreover, since the movement of the ground vehicle is likely to be constrained by the road, this information is taken as domain knowledge and applied together with the tracking algorithm to improve tracking performance. Simulations are presented to show the advantages of both the new algorithm and the incorporation of road information by evaluating the root mean square error (RMSE).
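A single auxiliary-particle-filter step for a scalar linear-Gaussian model illustrates the two-stage weighting that makes the APF more efficient than bootstrap filtering. The model and noise parameters below are illustrative, not from the paper.

```python
import math, random

def apf_step(particles, weights, y, a=0.9, q=0.5, r=0.5, rng=random):
    """One auxiliary particle filter step for x' = a*x + N(0,q), y = x' + N(0,r).

    Stage 1 pre-selects particles whose predicted mean explains the new
    observation; stage 2 corrects for having used the mean as a proxy.
    """
    def lik(y, m, var):  # Gaussian likelihood
        return math.exp(-(y - m) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    n = len(particles)
    mu = [a * x for x in particles]                    # predicted means
    lam = [w * lik(y, m, r) for w, m in zip(weights, mu)]
    s = sum(lam)
    lam = [l / s for l in lam]
    idx = rng.choices(range(n), weights=lam, k=n)      # first-stage resampling
    new = [a * particles[j] + rng.gauss(0, math.sqrt(q)) for j in idx]
    w2 = [lik(y, x, r) / lik(y, mu[j], r) for x, j in zip(new, idx)]
    s = sum(w2)
    return new, [w / s for w in w2]

rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(500)]
xs, ws = apf_step(xs, [1 / 500] * 500, y=1.0, rng=rng)
est = sum(x * w for x, w in zip(xs, ws))  # posterior mean, pulled toward y
```

Because particles are pre-selected by how well their predicted means fit the observation, fewer particles are wasted in low-likelihood regions than in the bootstrap filter.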
A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.
Brown, J A; Capson, D W
2012-01-01
A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates. PMID:21301027
NASA Astrophysics Data System (ADS)
Yun, Youngsun; Kim, Doyoon; Kee, Changdon
For more accurate and reliable aviation navigation systems which can be used for civil and military aircraft or missiles, researchers have employed various filtering methods to reduce the measurement noise level, or to integrate sensors such as global navigation satellite system/inertial navigation system (GNSS/INS) integration. Most GNSS applications including Differential GNSS assume that the GNSS measurement error follows a Gaussian distribution, but this is not true. Therefore, we propose an integrity monitoring method using particle filters assuming non-Gaussian measurement error. The performance of our method was contrasted with that of conventional Kalman filter methods with an assumed Gaussian error. Since the Kalman filters presume that measurement error follows a Gaussian distribution, they use an overbounded standard deviation to represent the measurement error distribution, and since the overbound standard deviations are too conservative compared to actual deviations, this degrades the integrity monitoring performance of the filters. A simulation was performed to show the improvement in performance provided by our proposed particle filter method, which does not use sigma overbounding. The results show that our method can detect about 20% smaller measurement biases and reduce the protection level by 30% versus the Kalman filter method based on an overbound sigma, which motivates us to use an actual error model instead of overbounding, or to improve the overbounding methods.
Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing
NASA Astrophysics Data System (ADS)
Cox, Mitchell A.
2015-10-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several ARM Systems on Chip in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high-throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is currently used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
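As a rough sketch of what Optimal Filtering computes: the pulse amplitude (energy) is reconstructed as a weighted sum of digitized samples. Under the simplifying assumption of white noise and amplitude-only estimation, the weights reduce to a matched filter; the real ATLAS OF also constrains timing and pedestal, and the pulse shape below is invented for illustration.

```python
def of_amplitude(samples, pulse):
    """Estimate signal amplitude as a weighted sum of ADC samples.

    Under white noise, the optimal weights for an amplitude-only estimate
    reduce to a matched filter: a_i = g_i / sum(g_j^2) for a unit-amplitude
    pulse shape g.  (The full OF also constrains timing and pedestal.)
    """
    norm = sum(g * g for g in pulse)
    weights = [g / norm for g in pulse]
    return sum(a * s for a, s in zip(weights, samples))

pulse = [0.0, 0.4, 1.0, 0.6, 0.2]     # assumed unit-amplitude pulse shape
samples = [5 * g for g in pulse]      # noiseless pulse of amplitude 5
amp = of_amplitude(samples, pulse)    # recovers 5 up to float rounding
```

The appeal for ARM front-end processing is that this is only a handful of multiply-accumulates per channel per event.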
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified by small-angle attitude errors. However, this simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) through the introduction of the DCM error, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position serve as the reference points and the innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
Actin Filament Tracking Based on Particle Filters and Stretching Open Active Contour Models
Li, Hongsheng; Shen, Tian; Vavylonis, Dimitrios; Huang, Xiaolei
2010-01-01
We introduce a novel algorithm for actin filament tracking and elongation measurement. Particle Filters (PF) and Stretching Open Active Contours (SOAC) work cooperatively to simplify the modeling of PF in a one-dimensional state space while naturally integrating filament body constraints to tip estimation. Existing microtubule (MT) tracking methods track either MT tips or entire bodies in high-dimensional state spaces. In contrast, our algorithm reduces the PF state spaces to one-dimensional spaces by tracking filament bodies using SOAC and probabilistically estimating tip locations along the curve length of SOACs. Experimental evaluation on TIRFM image sequences with very low SNRs demonstrates the accuracy and robustness of the proposed approach. PMID:20426170
Canedo-Rodriguez, Adrian; Rodriguez, Jose Manuel; Alvarez-Santos, Victor; Iglesias, Roberto; Regueiro, Carlos V.
2015-01-01
In wireless positioning systems, the transmitter's power is usually fixed. In this paper, we explore the use of varying transmission powers to increase the performance of a wireless localization system. To this end, we have designed a robot positioning system based on wireless motes. Our motes use an inexpensive, low-power sub-1-GHz system-on-chip (CC1110) working in the 433-MHz ISM band. Our localization algorithm is based on a particle filter and infers the robot position by: (1) comparing the power received with the expected one; and (2) integrating the robot displacement. We demonstrate that the use of transmitters that vary their transmission power over time improves the performance of the wireless positioning system significantly, with respect to a system that uses fixed power transmitters. This opens the door for applications where the robot can localize itself actively by requesting the transmitters to change their power in real time. PMID:25942641
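The measurement update of such a particle filter, step (1) above, can be sketched with a log-distance path-loss model. The model constants (reference power p0, path-loss exponent n, shadowing sigma) are illustrative assumptions, not values from the paper.

```python
import math

def rss_expected(pos, tx, p0=-40.0, n=2.5):
    """Log-distance path-loss model: RSS at 1 m is p0 dBm, exponent n (assumed)."""
    d = max(0.1, math.dist(pos, tx))
    return p0 - 10.0 * n * math.log10(d)

def weight_particles(particles, tx, rss_meas, sigma=4.0):
    """Weight each candidate robot position by the Gaussian likelihood of the
    measured vs. expected RSS (in dBm), then normalise."""
    w = [math.exp(-(rss_meas - rss_expected(p, tx)) ** 2 / (2 * sigma ** 2))
         for p in particles]
    s = sum(w)
    return [x / s for x in w]
```

Varying the transmission power amounts to shifting p0 per measurement, which gives the filter several independent looks at the same geometry.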
Indoor anti-occlusion visible light positioning systems based on particle filtering
NASA Astrophysics Data System (ADS)
Jiang, Meng; Huang, Zhitong; Li, Jianfeng; Zhang, Ruqi; Ji, Yuefeng
2015-04-01
As one of the most popular categories of mobile services, indoor location-based services have seen rapid growth over the past decades. Indoor positioning methods based on Wi-Fi, radio-frequency identification or Bluetooth are widely commercialized; however, they have disadvantages such as low accuracy or high cost. An emerging method using visible light has recently been under research. Existing visible light positioning (VLP) schemes using carrier allocation, time allocation and multiple receivers all have limitations. This paper presents a novel mechanism using particle filtering in a VLP system. With this method no additional devices are needed, and the occlusion problem in visible light is alleviated, which effectively enhances the flexibility of indoor positioning.
An iterative particle filter approach for respiratory motion estimation in nuclear medicine imaging
NASA Astrophysics Data System (ADS)
Abd. Rahni, Ashrani Aizzuddin; Wells, Kevin; Lewis, Emma; Guy, Matthew; Goswami, Budhaditya
2011-03-01
The continual improvement in the spatial resolution of Nuclear Medicine (NM) scanners has made accurate compensation of patient motion increasingly important. A major source of corrupting motion in NM acquisition is respiration. A particle filter (PF) approach has therefore been proposed as a powerful method for motion correction in NM. The probabilistic view of the system taken by the PF is an advantage given the complexity and uncertainty of estimating respiratory motion. Previous tests using XCAT have shown the possibility of estimating unseen organ configurations using training data consisting of only a single respiratory cycle. This paper augments the application-specific adaptation methods previously implemented for better PF estimates with an iterative model update step. Results show that errors are further reduced over a small number of iterations, and such improvements will help the PF cope with more realistic and complex applications.
Particle filter-based relative rolling estimation algorithm for non-cooperative infrared spacecraft
NASA Astrophysics Data System (ADS)
Li, Zhengzhou; Ge, Fengzeng; Chen, Wenhao; Shao, Wanxing; Liu, Bing; Cheng, Bei
2016-09-01
The mismatching of feature points across an infrared image sequence poses a major challenge to estimating the relative motion of a non-cooperative spacecraft, because no prior knowledge of its geometric structure and motion pattern is available. This paper introduces a particle filter to precisely match feature points within a region predicted by a kinetic equation, and presents a least-squares estimation-based algorithm to measure the relative rolling motion of the non-cooperative spacecraft. The state transition equation and the measurement update equation of the non-cooperative spacecraft are obtained by establishing its kinetic equations, and the relative pose measurement is then converted to maximum a posteriori probability estimation by treating the uncertainties in geometric structure and motion pattern as random, time-varying variables. These uncertainties can be interpreted, and even resolved, by continuously measuring the image feature points of the rotating non-cooperative infrared spacecraft. Subsequently, each feature point is matched within a predicted region across the infrared image sequence using the particle filter algorithm to overcome the position estimation noise caused by the uncertainties in geometric structure and motion pattern. Finally, the pose parameters, including the rotation motion, are estimated by minimizing the feature point mismatch error using least-squares estimation theory. Both simulated and real infrared image sequences are used in experiments to evaluate the performance of the relative rolling estimation, and the experimental data show that the rolling motion estimated by the proposed algorithm is more robust to feature extraction noise and to various rotation speeds. Meanwhile, the relative rolling estimation error increases markedly with increasing distance and rotation speed.
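The final least-squares step, recovering a rotation from matched feature points, has a closed form in 2-D (the planar Procrustes solution). The sketch below is a simplified stand-in for the paper's estimator and assumes matched, centred point sets with no translation.

```python
import math

def estimate_rotation(pts, pts_rot):
    """Least-squares 2-D rotation angle between matched point sets.

    Minimises sum ||R(theta) p_i - q_i||^2; the closed form is
    theta = atan2(sum(x*y' - y*x'), sum(x*x' + y*y')).
    Assumes both sets are centred (pure rotation, no translation).
    """
    num = sum(x * yr - y * xr for (x, y), (xr, yr) in zip(pts, pts_rot))
    den = sum(x * xr + y * yr for (x, y), (xr, yr) in zip(pts, pts_rot))
    return math.atan2(num, den)

pts = [(1.0, 0.0), (0.0, 2.0), (-1.0, 1.0)]
c, s = math.cos(0.3), math.sin(0.3)
rot = [(x * c - y * s, x * s + y * c) for x, y in pts]
theta = estimate_rotation(pts, rot)   # recovers 0.3 rad
```

With noisy, particle-filter-matched correspondences the same formula yields the least-squares roll angle rather than an exact one.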
NASA Astrophysics Data System (ADS)
Liu, Qitao; Li, Yingchun; Sun, Huayan; Zhao, Yanzhong
2008-03-01
Laser active imaging systems, which offer high resolution, anti-jamming capability and three-dimensional (3-D) imaging, have been widely used. However, their imagery is usually affected by speckle noise, which makes the grayscale of pixels change violently, hides subtle details and greatly degrades imaging resolution. Removing speckle noise is one of the most difficult problems encountered in such systems because of the poor statistical properties of speckle. Based on an analysis of the statistical characteristics of speckle and of morphological filtering, an improved multistage morphological filtering algorithm is studied in this paper and implemented on a TMS320C6416 DSP. The algorithm applies morphological open-close and close-open transformations using two different linear structuring elements, and then takes a weighted average of the transformation results, with the weighting coefficients determined by the statistical characteristics of the speckle. The algorithm was implemented on the TMS320C6416 DSP after simulation on a computer, and the software design procedure is fully presented. The methods used to realize and optimize the algorithm are illustrated through a study of the structural characteristics of the TMS320C6416 DSP and the features of the algorithm. To fully benefit from such devices and increase the performance of the whole system, a series of steps to optimize the DSP programs is necessary. This paper introduces several effective methods for TMS320C6x C-language optimization, including refining code structure, eliminating memory dependence and optimizing assembly code via linear assembly, and then offers the results of their application in a real-time implementation. The results of processing images blurred by speckle noise show that the algorithm can not only effectively suppress speckle noise but also preserve the geometrical features of images. The results of the optimized code running on the DSP platform
Particle Swarm Optimization Approach in a Consignment Inventory System
NASA Astrophysics Data System (ADS)
Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman
2009-09-01
Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. Here a Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally a sensitivity analysis is performed to examine the effects of each parameter on decision variables. Also PSO performance is compared with genetic algorithm.
Order-2 Stability Analysis of Particle Swarm Optimization.
Liu, Qunfeng
2015-01-01
Several stability analyses and stable regions of particle swarm optimization (PSO) have been proposed before, adopting a stagnation assumption and differing definitions of stability. In this paper, the order-2 stability of PSO is analyzed based on a weak stagnation assumption. A new definition of stability is proposed and an order-2 stable region is obtained. Several existing stability analyses for canonical PSO are compared, especially their definitions of stability and the corresponding stable regions. It is shown that the classical stagnation assumption is too strict and not necessary. Moreover, among all these definitions of stability, ours requires the weakest conditions, and additional conditions bring no benefit. Finally, numerical experiments are reported to show that the obtained stable region is meaningful. A new parameter combination of PSO is also shown to be good, even better than some known best parameter combinations. PMID:24738856
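For context, a frequently cited order-2 (variance) stable region for canonical PSO, derived under the classical stagnation assumption that this paper argues is too strict, is the following (after Poli's analysis; w is the inertia weight and c = c1 + c2 the sum of the acceleration coefficients):

```latex
% Order-2 stable region of canonical PSO under the classical stagnation
% assumption (Poli, 2009); the paper derives its region under weaker conditions.
\[
  0 < c < \frac{24\,(1 - w^2)}{7 - 5w}, \qquad -1 < w < 1
\]
```

The paper's contribution is to obtain a comparable region without requiring full stagnation of the swarm.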
A Triangle Mesh Standardization Method Based on Particle Swarm Optimization
Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang
2016-01-01
To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order vertices are fitted to a cubic curved surface using the least-squares method. Then, taking the locally fitted surface as the search region of PSO and the best average quality of the local triangles as the goal, the vertex positions of the mesh are adjusted. Finally, a threshold on the normal angle between the original and adjusted vertex is used to determine whether the vertex needs adjusting, so as to preserve the detailed features of the mesh. Experimental results show that, compared with existing methods, the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
Rod-filter-field optimization of the J-PARC RF-driven H- ion source
NASA Astrophysics Data System (ADS)
Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.
2015-04-01
In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter-field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for an RF-driven H- ion source with an internal antenna, no procedure to optimize it has been established. In order to optimize the RFF and establish such a procedure, the beam performances of the J-PARC source with various types of rod-filter-magnets (RFMs) were measured. By changing the RFM gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM cross-section (magnetomotive force) was indispensable for easy operation with the temperature (TPE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFM cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for TPE around 60°C.
An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel
NASA Astrophysics Data System (ADS)
Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.
2015-01-01
This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized using the APSO algorithm are pulse on-time, pulse off-time, gap voltage and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established using linear regression models in Minitab software. Finally, the APSO algorithm, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as a constraint. Confirmation experiments carried out at the optimum conditions show that the proposed algorithm is effective in finding numerous optimal input machining parameters which can fulfill the wide requirements of a process engineer working in the WEDM industry.
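One common velocity-free formulation of accelerated PSO (after Yang) is sketched below. The paper does not give its exact variant, so the update rule and all constants here are assumptions for illustration only.

```python
import random

def apso(f, dim=2, n=25, iters=200, alpha0=0.5, beta=0.4, gamma=0.97, seed=3):
    """Accelerated PSO sketch (after Yang): velocity-free update
    x <- (1-beta)*x + beta*g + alpha*eps, with randomisation alpha annealed
    by the factor gamma each iteration."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    g = min(X, key=f)[:]          # best position found so far
    alpha = alpha0
    for _ in range(iters):
        for x in X:
            for d in range(dim):
                x[d] = (1 - beta) * x[d] + beta * g[d] + alpha * rng.gauss(0, 1)
            if f(x) < f(g):
                g = x[:]
        alpha *= gamma            # anneal the exploration term
    return g
```

Dropping the velocity term and annealing the noise is what makes this variant "accelerated": fewer parameters and faster contraction onto the incumbent best, at some cost in exploration.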
Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo
2012-05-01
We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and the flexibility of wavelet design, we propose the use of the undecimated discrete wavelet transform, and separate design of column and row wavelet filters but with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches the minimum value of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939
Diesel passenger car PM emissions: From Euro 1 to Euro 4 with particle filter
NASA Astrophysics Data System (ADS)
Tzamkiozis, Theodoros; Ntziachristos, Leonidas; Samaras, Zissis
2010-03-01
This paper examines the impact of emission control and fuel technology development on the emissions of gaseous and, in particular, PM pollutants from diesel passenger cars. Three cars in five configurations in total were measured, covering the range from Euro 1 to Euro 4 standards. The emission control ranged from no aftertreatment in the Euro 1 case, an oxidation catalyst in Euro 2, and two oxidation catalysts with exhaust gas recirculation in Euro 3 and Euro 4, while a catalyzed diesel particle filter (DPF) fitted to the Euro 4 car led to a Euro 4 + DPF configuration. Both certification-test and real-world driving cycles were employed. The results showed that CO and HC emissions were much lower than the emission standard over the hot-start real-world cycles. However, vehicle technologies from Euro 2 to Euro 4 exceeded the NOx and PM emission levels over at least one real-world cycle. The NOx emission level reached up to 3.6 times the certification level in the case of the Euro 4 car. PM were up to 40% and 60% higher than the certification level for the Euro 2 and Euro 3 cars, while the Euro 4 car emitted close to or slightly below the certification level over the real-world driving cycles. PM mass reductions from Euro 1 to Euro 4 were associated with a relevant decrease in the total particle number, in particular over the certification test. This was not followed by a respective reduction in the solid particle number, which remained rather constant across the four technologies at 0.86 × 10^14 km^-1 (coefficient of variation 9%). As a result, the ratio of solid to total particle number ranged from ~50% in Euro 1 to 100% in Euro 4. A significant reduction of more than three orders of magnitude in solid particle number is achieved with the introduction of the DPF. However, the potential for nucleation mode formation at high speed from the DPF car is an issue that needs to be considered in the overall assessment of its environmental benefit. Finally, comparison of the
Cho, Kyungmin Jacob; Turkevich, Leonid; Miller, Matthew; McKay, Roy; Grinshpun, Sergey A; Ha, KwonChul; Reponen, Tiina
2013-01-01
This study investigated differences in penetration between fibers and spherical particles through faceseal leakage of an N95 filtering facepiece respirator. Three cyclic breathing flows were generated corresponding to mean inspiratory flow rates (MIF) of 15, 30, and 85 L/min. Fibers had a mean diameter of 1 μm and a median length of 4.9 μm (calculated aerodynamic diameter, d(ae) = 1.73 μm). Monodisperse polystyrene spheres with a mean physical diameter of 1.01 μm (PSI) and 1.54 μm (PSII) were used for comparison (calculated d(ae) = 1.05 and 1.58 μm, respectively). Two optical particle counters simultaneously determined concentrations inside and outside the respirator. Geometric means (GMs) for filter penetration of the fibers were 0.06, 0.09, and 0.08% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.07, 0.12, and 0.12%. GMs for faceseal penetration of fibers were 0.40, 0.14, and 0.09% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.96, 0.41, and 0.17%. Faceseal penetration decreased with increased breathing rate for both types of particles (p ≤ 0.001). GMs of filter and faceseal penetration of PSII at an MIF of 30 L/min were 0.14% and 0.36%, respectively. Filter penetration and faceseal penetration of fibers were significantly lower than those of PSI (p < 0.001) and PSII (p < 0.003). This confirmed that the higher penetration of PSI was not due to its slightly smaller aerodynamic diameter, indicating that the shape of fibers rather than their calculated mean aerodynamic diameter is the prevailing factor in deposition mechanisms through the tested respirator. In conclusion, faceseal penetration of fibers and spherical particles decreased with increasing breathing rate, which can be explained by increased capture by impaction. Spherical particles had 2.0-2.8 times higher penetration through faceseal leaks and 1.1-1.5 times higher penetration through filter media than fibers, which can be attributed to
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation
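The under-determined setup above (fewer sensors than health parameters, so only a subset can be tuned) can be illustrated with a deliberately simplified sketch: instead of the paper's iterative Kalman-filter search, a brute-force subset search over a static linear measurement model, scoring each candidate tuner set by empirical mean-squared estimation error. The matrix `H` and noise level are invented toy values, not engine data.

```python
import itertools
import numpy as np

def select_tuners(H, n_tuners, n_trials=2000, seed=0):
    """Pick the subset of health parameters (columns of H) whose
    least-squares estimate minimizes empirical mean-squared error,
    given fewer sensors than parameters."""
    rng = np.random.default_rng(seed)
    n_sensors, n_params = H.shape
    X = rng.standard_normal((n_params, n_trials))       # true health deltas
    Y = H @ X + 0.1 * rng.standard_normal((n_sensors, n_trials))
    best = None
    for subset in itertools.combinations(range(n_params), n_tuners):
        Hs = H[:, subset]
        Xs_hat = np.linalg.pinv(Hs) @ Y                 # tuner estimates
        X_hat = np.zeros_like(X)
        X_hat[list(subset), :] = Xs_hat                 # untuned params stay 0
        mse = np.mean((X - X_hat) ** 2)
        if best is None or mse < best[1]:
            best = (subset, mse)
    return best

H = np.array([[1.0, 0.2, 0.0, 0.5],
              [0.1, 1.0, 0.3, 0.0],
              [0.0, 0.4, 1.0, 0.2]])    # 3 sensors, 4 health parameters
subset, mse = select_tuners(H, n_tuners=3)
```

The exhaustive search is only viable for small parameter counts; the paper's multi-variable iterative routine addresses the realistic case.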
Niederhauser, Thomas; Wyss-Balmer, Thomas; Haeberlin, Andreas; Marisa, Thanks; Wildhaber, Reto A; Goette, Josef; Jacomet, Marcel; Vogel, Rolf
2015-06-01
Long-term electrocardiogram (ECG) often suffers from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are dedicated for prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be put in relation to the increase in signal-to-baseline ratio (SBR). Here, we present a graphics processor unit (GPU)-based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite, and infinite impulse response, moving mean, and moving median filters. Individual filter parameters were optimized with respect to the SBR increase based on ECGs from the Physionet database superimposed on autoregressively modeled real baseline wander. A Monte-Carlo simulation showed that for low input SBR the moving median filter outperforms any other method but negatively affects ECG wave detection. In contrast, the infinite impulse response filter is preferred in case of high input SBR. However, the parallelized wavelet filter is processed 500 and four times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low SBR situations. Using a signal segment of 64 mega samples that is filtered as an entire unit, wavelet filtering of a seven-day high-resolution ECG is computed within less than 3 s. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander present in long-term ECGs, with which computational burden can be strongly reduced. PMID:25675449
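Of the filters compared above, the moving median is the simplest to sketch: the baseline is estimated as a running median and subtracted. This is a minimal CPU reference, not the paper's GPU implementation, and the window length and toy signal are assumptions.

```python
import numpy as np

def moving_median_baseline(x, win=51):
    """Estimate baseline wander as a running median and subtract it.
    Window length (odd, in samples) should exceed the widest ECG wave."""
    assert win % 2 == 1
    pad = win // 2
    xp = np.pad(x, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(xp, win)
    baseline = np.median(windows, axis=1)    # one median per sample
    return x - baseline, baseline

# toy signal: sparse "R peaks" riding on a slow ramp baseline
n = 600
ramp = np.linspace(0.0, 2.0, n)
sig = ramp.copy()
sig[::100] += 1.0                            # narrow spikes every 100 samples
filtered, est = moving_median_baseline(sig, win=51)
```

Because the median ignores sparse outliers, the spikes survive while the ramp is removed; this robustness is also why the paper finds it can distort genuine ECG waves wider than the window allows for.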
Video object tracking using improved chamfer matching and condensation particle filter
NASA Astrophysics Data System (ADS)
Wu, Tao; Ding, Xiaoqing; Wang, Shengjin; Wang, Kongqiao
2008-02-01
Object tracking is an essential problem in the field of video and image processing. Although tracking algorithms working on gray video are convenient in actual applications, they are more difficult to develop than those using color features, since less information is taken into account. Little research has been dedicated to tracking objects using edge information. In this paper, we propose a novel video tracking algorithm based on edge information for gray videos. This method adopts the combination of a condensation particle filter and an improved chamfer matching. The improved chamfer matching is rotation invariant and capable of estimating the shift between an observed image patch and a template by an orientation distance transform. A modified discriminative likelihood measurement method that focuses on the difference is adopted. These values are normalized and used as the weights of particles, which predict and track the object. Experimental results show that our modifications to chamfer matching improve its performance in the video tracking problem, and that the algorithm is stable, robust, and can effectively handle rotation distortion. Further work can be done on updating the template to adapt to significant viewpoint and scale changes in the appearance of the object during tracking.
Object tracking with adaptive HOG detector and adaptive Rao-Blackwellised particle filter
NASA Astrophysics Data System (ADS)
Rosa, Stefano; Paleari, Marco; Ariano, Paolo; Bona, Basilio
2012-01-01
Scenarios for a manned mission to the Moon or Mars call for astronaut teams to be accompanied by semiautonomous robots. A prerequisite for human-robot interaction is the capability of successfully tracking humans and objects in the environment. In this paper we present a system for real-time visual object tracking in 2D images for mobile robotic systems. The proposed algorithm is able to specialize to individual objects and to adapt to substantial changes in illumination and object appearance during tracking. The algorithm is composed of two main blocks: a detector based on Histogram of Oriented Gradient (HOG) descriptors and linear Support Vector Machines (SVM), and a tracker which is implemented by an adaptive Rao-Blackwellised particle filter (RBPF). The SVM is re-trained online on new samples taken from previous predicted positions. We use the effective sample size to decide when the classifier needs to be re-trained. Position hypotheses for the tracked object are the result of a clustering procedure applied on the set of particles. The algorithm has been tested on challenging video sequences presenting strong changes in object appearance, illumination, and occlusion. Experimental tests show that the presented method is able to achieve near real-time performance with a precision of about 7 pixels on standard video sequences of dimensions 320 × 240.
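The effective sample size (ESS) used above as the re-training trigger has a standard closed form, 1 / Σ w_i² for normalized particle weights. A minimal sketch follows; the 0.5 threshold is an illustrative assumption, not the paper's tuned value.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized particle weights; an ESS far
    below the particle count signals weight degeneracy."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def needs_retraining(weights, ratio=0.5):
    """Trigger re-training of the detector when the ESS drops below
    ratio * N (threshold is an assumed value for illustration)."""
    return effective_sample_size(weights) < ratio * len(weights)
```

ESS equals N for uniform weights and approaches 1 when a single particle carries all the weight, which is why it is a natural health indicator for the tracker.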
Binary Particle Swarm Optimization based Biclustering of Web Usage Data
NASA Astrophysics Data System (ADS)
Rathipriya, R.; Thangavel, K.; Bagyamani, J.
2011-07-01
Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques. It may yield information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data, it automatically captures the hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the globally optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages which are useful for e-commerce applications such as web advertising and marketing. Experiments are conducted on a real dataset to demonstrate the efficiency of the proposed algorithm.
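The core BPSO mechanics (real-valued velocities, bits resampled through a sigmoid of the velocity) can be sketched generically. This is a minimal Kennedy-Eberhart-style BPSO on a toy bit-counting objective, not the paper's bicluster fitness function; all parameter values are illustrative assumptions.

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=30, iters=100, seed=1):
    """Minimal binary PSO: velocities are real-valued, and each bit is
    resampled with probability sigmoid(velocity)."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, n_bits))
    V = np.zeros((n_particles, n_bits))
    pbest, pfit = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pfit)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        V = np.clip(V, -4, 4)                         # keep sigmoid responsive
        X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        fit = np.array([fitness(x) for x in X])
        improved = fit > pfit
        pbest[improved], pfit[improved] = X[improved], fit[improved]
        g = pbest[np.argmax(pfit)].copy()
    return g, int(pfit.max())

# toy objective: maximize the number of 1-bits (stand-in for bicluster quality)
best, best_fit = bpso(lambda x: int(x.sum()), n_bits=16)
```

In the biclustering setting, the bit vector would encode which users and pages belong to the candidate bicluster, and the fitness would score its coherence.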
NASA Astrophysics Data System (ADS)
Galatus, Ramona; Valles, Juan
2016-04-01
An optimized geometry based on high-order active microring resonators (MRRs) is proposed. The solution provides both filtering and amplifying functions for the signal at around 1534 nm (pump at 976 nm). The structure under analysis is a cross-grid resonator with laterally, series-coupled triple microrings of 15.35 μm radius, in a co-propagation topology between signal and pump (commonly termed an add-drop filter).
Optimizing magnetite nanoparticles for mass sensitivity in magnetic particle imaging
Ferguson, R. Matthew; Minard, Kevin R.; Khandhar, Amit P.; Krishnan, Kannan M.
2011-01-01
Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, the authors explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency f0. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. The driving field amplitude H0 = 6 mT μ0^-1 and frequency f0 = 250 kHz were chosen to be suitable for imaging small animals. Experimental results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: The experimental results show a clear variation in the MPI signal intensity as a function of MNP diameter that is in agreement with simulated results. A maximum in the plot of MPI signal vs MNP size indicates there is a particular size that is optimal for the chosen f0. Conclusions: The authors observed that MNPs 15 nm in diameter generate maximum signal amplitude in MPI experiments at 250 kHz. The authors expect the physical basis for this result, the change in magnetic relaxation with MNP size, will impact MPI under other experimental conditions. PMID:21520874
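The size dependence of the third-harmonic response can be sketched from the static Langevin model alone. Note this deliberately omits the size-dependent relaxation that the paper identifies as the cause of the observed optimum, so it reproduces only the equilibrium part of the response; the magnetite Ms and other constants are nominal assumed values.

```python
import numpy as np

def third_harmonic(d_nm, H0=6e-3 / (4e-7 * np.pi), f0=250e3, T=300.0,
                   Ms=446e3):
    """Third-harmonic amplitude of the static Langevin magnetization
    (normalized by Ms) for a magnetite particle of diameter d_nm (nm).
    Relaxation is ignored; constants are nominal assumptions."""
    kB = 1.380649e-23
    mu0 = 4e-7 * np.pi
    V = np.pi / 6 * (d_nm * 1e-9) ** 3           # particle volume, m^3
    xi0 = mu0 * Ms * V * H0 / (kB * T)           # Langevin argument amplitude
    t = np.linspace(0, 1 / f0, 4096, endpoint=False)
    xi = xi0 * np.sin(2 * np.pi * f0 * t)
    with np.errstate(all="ignore"):
        # Langevin function with a small-argument fallback near xi = 0
        M = np.where(np.abs(xi) < 1e-6, xi / 3,
                     1.0 / np.tanh(xi) - 1.0 / np.where(xi == 0, 1, xi))
    # Fourier sine coefficient at 3*f0
    return abs(2 * np.mean(M * np.sin(3 * 2 * np.pi * f0 * t)))
```

In this relaxation-free model the harmonic grows monotonically with diameter (moment scales as d^3); the experimentally observed maximum at 15 nm emerges only once finite relaxation times are included.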
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique are proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level
Agyei-Aye, K; Appleton, S; Rogers, R A; Taylor, C R
2004-08-01
This experiment was designed to study the release of cellulose acetate fibers, charcoal, and other particles from cigarettes with charcoal and activated charcoal/resin filters. For the first time in such studies, efforts were made to identify the particles that were eluted using other analytical techniques in addition to light microscopy. Other corrective measures were also implemented. During the studies it was found that trimming of larger filters to fit smaller filter housings introduced cellulose acetate-like particles from the fibers of the filter material. Special, custom made-to-fit filters were used instead. Tools such as forceps that were used to retrieve filters from their housings were also found to introduce fragments onto the filters. It is believed that introduction of such debris may have accounted for the very large number of cellulose acetate and charcoal particles that had been reported in the literature. Use of computerized particle-counting microscopes appeared to result in an excessive number of particles. This could be because the filter or smoke pads used for such work do not have the flat and level surfaces ideal for computerized particle-counting microscopes. At the high magnifications at which the pads were viewed for particles, constant focusing of the microscope would be essential. It was also found that determination of total particles by extrapolation of particle count by grid population usually gave extremely high particle counts compared to the actual number of particles present. This could be because particle distributions during smoking are not uniform. Lastly, a less complex estimation of the thickness of the particles was adopted. This and the use of a simple mathematical conversion coupled with the Cox equation were utilized to assess the aerodynamic diameters of the particles. Our findings showed that compared to numbers quoted in the literature, only a small amount of charcoal, cellulose acetate shards, and other particles are
NASA Astrophysics Data System (ADS)
Zhu, Jianhua; Wan, Lei; Nie, Guosheng; Guo, Xiaowei
2003-12-01
In this paper, for the first time to our knowledge, a novel acousto-optic pure rotational Raman lidar based on an acousto-optic tunable filter (AOTF) is put forward for the application of atmospheric temperature measurements. The AOTF is employed in the novel lidar system as a narrow band-pass filter and high-speed single-channel wavelength scanner. This new acousto-optic filtering technique can solve the problems of conventional pure rotational Raman lidar, e.g., low temperature detection sensitivity, untunability of filtering parameters, and signal interference between different detection channels. This paper focuses on the PRRS physical model calculation and simulation optimization of system parameters such as the central wavelengths and bandwidths of the filtering operation and the required sensitivity. The theoretical calculations and optimization of the AOTF spectral filtering parameters are conducted to achieve high temperature dependence and sensitivity, high signal intensities, high temperature of filtered spectral passbands, and adequate blocking of elastic Mie and Rayleigh scattering signals. The simulation results can provide a suitable proposal and theoretical evaluation before the integration of a practical Raman lidar system.
NASA Astrophysics Data System (ADS)
Howard-Reed, Cynthia; Wallace, Lance A.; Emmerich, Steven J.
Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we measured the deposition of particles ranging from 0.3 to 10 μm in an occupied townhouse and also in an unoccupied test house. Experiments were run with three different sources (cooking with a gas stove, citronella candle, pouring kitty litter), with the central heating and air conditioning (HAC) fan on or off, and with two different types of in-duct filters (electrostatic precipitator and ordinary furnace filter). Particle size, HAC fan operation, and the electrostatic precipitator had significant effects on particle loss rates. The standard furnace filter had no effect. Surprisingly, the type of source (combustion vs. mechanical generation) and the type of furnishings (fully furnished including carpet vs. largely unfurnished including mostly bare floor) also had no measurable effect on the deposition rates of particles of comparable size. With the HAC fan off, average deposition rates varied from 0.3 h -1 for the smallest particle range (0.3-0.5 μm) to 5.2 h -1 for particles greater than 10 μm. Operation of the central HAC fan approximately doubled these rates for particles <5 μm, and increased rates by 2 h -1 for the larger particles. An in-duct electrostatic precipitator increased the loss rates compared to the fan-off condition by factors of 5-10 for particles <2.5 μm, and by a factor of 3 for 2.5-5.0 μm particles. In practical terms, use of the central fan alone could reduce indoor particle concentrations by 25-50%, and use of an in-duct ESP could reduce particle concentrations by 55-85% compared to fan-off conditions.
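The practical 25-85% concentration reductions quoted above follow from a well-mixed single-zone balance in which steady-state concentration scales as 1/(air-exchange rate + total loss rate). A minimal sketch, with the air-exchange rate as an assumed value (the abstract does not report one):

```python
def steady_state_reduction(a, k_base, k_added):
    """Fractional reduction in steady-state indoor particle concentration
    when an extra loss rate k_added (fan or in-duct filter, 1/h) is added
    to air-exchange rate a and baseline deposition k_base (both 1/h),
    assuming a well-mixed single-zone balance with C_ss proportional
    to 1/(a + k_total)."""
    return 1.0 - (a + k_base) / (a + k_base + k_added)

# e.g. 0.3-0.5 um particles: deposition 0.3/h fan off; the fan roughly
# doubles it (per the measurements above); a = 0.5/h is an assumed value
r_fan = steady_state_reduction(a=0.5, k_base=0.3, k_added=0.3)
```

With these inputs the fan alone yields roughly a 27% reduction, consistent with the 25-50% range reported; the larger ESP loss rates push the same formula toward the 55-85% range.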
Field, Matthew A; Cho, Vicky; Andrews, T Daniel; Goodnow, Chris C
2015-01-01
A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable, however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high quality 'genome in a bottle' reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining the variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or the union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, variant caller filter optimization, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
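The intersection-versus-union trade-off described above reduces to simple set operations once each call is keyed consistently. A sketch, assuming variants are keyed by (chromosome, position, ref, alt); the caller names and variants are illustrative, not from the study.

```python
def combine_calls(callsets, mode="intersection"):
    """Combine variant calls from multiple tools. Intersection favors
    specificity (a call must be made by every tool); union favors
    sensitivity (any single tool's call is kept)."""
    sets = [set(c) for c in callsets]
    if mode == "intersection":
        out = set.intersection(*sets)
    elif mode == "union":
        out = set.union(*sets)
    else:
        raise ValueError(mode)
    return sorted(out)

# hypothetical outputs from two callers, keyed by (chrom, pos, ref, alt)
caller_a = {("chr1", 100, "A", "T"), ("chr1", 250, "G", "C")}
caller_b = {("chr1", 100, "A", "T"), ("chr2", 77, "T", "G")}
strict = combine_calls([caller_a, caller_b], "intersection")  # specificity
loose = combine_calls([caller_a, caller_b], "union")          # sensitivity
```

In practice normalization (left-alignment, multiallelic splitting) must happen before keying, or identical variants will fail to intersect.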
NASA Astrophysics Data System (ADS)
Kela, K. B.; Arya, L. D.
2014-09-01
This paper describes a methodology for determination of optimum failure rate and repair time for each section of a radial distribution system. An objective function in terms of reliability indices and their target values is selected. These indices depend mainly on failure rate and repair time of a section present in a distribution network. A cost is associated with the modification of failure rate and repair time. Hence the objective function is optimized subject to failure rate and repair time of each section of the distribution network considering the total budget allocated to achieve the task. The problem has been solved using differential evolution and bare bones particle swarm optimization. The algorithm has been implemented on a sample radial distribution system.
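The reliability indices that such an objective function targets depend on each section's failure rate and repair time in a standard way. A minimal sketch of SAIFI and SAIDI for a radial feeder, with invented illustrative data (the paper's objective additionally weighs modification costs against target index values):

```python
def reliability_indices(sections, n_total):
    """SAIFI (interruptions/customer/yr) and SAIDI (hours/customer/yr)
    for a radial feeder. Each section is a tuple
    (failure_rate_per_year, repair_time_hours, customers_interrupted)."""
    saifi = sum(lam * n for lam, r, n in sections) / n_total
    saidi = sum(lam * r * n for lam, r, n in sections) / n_total
    return saifi, saidi

# illustrative data: two sections of a feeder serving 160 customers
sections = [(0.2, 4.0, 100), (0.1, 2.0, 60)]
saifi, saidi = reliability_indices(sections, n_total=160)
```

Because both indices are linear in each section's λ and λ·r, reducing either quantity on heavily loaded sections gives the largest index improvement per unit of budget, which is what the differential evolution and bare-bones PSO searches trade off.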
The absorptivity and imaginary index of refraction for carbon and methylene blue particles were inferred from the photoacoustic spectra of samples collected on Teflon filter substrates. Three models of varying complexity were developed to describe the photoacoustic signal as a fu...
Nathan, Viswam; Akkaya, Ilge; Jafari, Roozbeh
2015-01-01
In this work, we describe a methodology to probabilistically estimate the R-peak locations of an electrocardiogram (ECG) signal using a particle filter. This is useful for heart rate estimation, which is an important metric for medical diagnostics. Some scenarios require constant in-home monitoring using a wearable device. This poses a particularly challenging environment for heart rate detection, due to the susceptibility of ECG signals to motion artifacts. In this work, we show how the particle filter can effectively track the true R-peak locations amidst the motion artifacts, given appropriate heart rate and R-peak observation models. A particle filter based framework has several advantages due to its freedom from strict assumptions on signal and noise models, as well as its ability to simultaneously track multiple possible heart rate hypotheses. Moreover, the proposed framework is not exclusive to ECG signals and could easily be leveraged for tracking other physiological parameters. We describe the implementation of the particle filter and validate our approach on real ECG data affected by motion artifacts from the MIT-BIH noise stress test database. The average heart rate estimation error is about 5 beats per minute for signal streams contaminated with noisy segments with SNR as low as -6 dB. PMID:26737796
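The tracking idea above can be illustrated with a minimal bootstrap particle filter over a scalar heart-rate state. This is a simplified stand-in for the paper's R-peak observation model: random-walk dynamics, a Gaussian observation likelihood, and systematic resampling; all noise parameters are assumed values.

```python
import numpy as np

def track_heart_rate(observations, n_particles=500, q=1.0, r=3.0, seed=0):
    """Bootstrap particle filter for a scalar heart-rate state (bpm):
    random-walk dynamics with std q, Gaussian likelihood with std r,
    systematic resampling every step."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(40, 180, n_particles)   # plausible bpm range
    estimates = []
    for z in observations:
        particles = particles + q * rng.standard_normal(n_particles)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w = w / w.sum()
        estimates.append(float(np.sum(w * particles)))  # posterior mean
        # systematic resampling
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(w), positions)
        particles = particles[np.minimum(idx, n_particles - 1)]
    return estimates

rng = np.random.default_rng(42)
obs = 70.0 + 3.0 * rng.standard_normal(60)          # noisy readings, true 70
est = track_heart_rate(obs)
```

The multimodality the paper exploits (several heart-rate hypotheses surviving simultaneously under motion artifacts) comes for free: distinct particle clusters persist until the observations disambiguate them.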
NASA Astrophysics Data System (ADS)
Lin, Liangkui; Xu, Hui; An, Wei; Sheng, Weidong; Xu, Dan
2011-11-01
This paper presents a novel approach to tracking a large number of closely spaced objects (CSO) in image sequences that is based on the particle probability hypothesis density (PHD) filter and multiassignment data association. First, the particle PHD filter is adopted to eliminate most of the clutter and to estimate multitarget states. In the particle PHD filter, a noniterative multitarget estimation technique is introduced to reliably estimate multitarget states, and an improved birth particle sampling scheme is presented to effectively acquire targets among clutter. Then, an integrated track management method is proposed to realize multitarget track continuity. The core of the track management is the track-to-estimation multiassignment association, which relaxes the traditional one-to-one data association restriction due to the unresolved focal plane CSO measurements. Meanwhile, a unified technique of multiple consecutive misses for track deletion is used jointly to cope with the sensitivity of the PHD filter to missed detections and to eliminate false alarms further, as well as to initiate tracks of large numbers of CSO. Finally, results of two simulations and one experiment show that the proposed approach is feasible and efficient.
Application of digital tomosynthesis (DTS) of optimal deblurring filters for dental X-ray imaging
NASA Astrophysics Data System (ADS)
Oh, J. E.; Cho, H. S.; Kim, D. S.; Choi, S. I.; Je, U. K.
2012-04-01
Digital tomosynthesis (DTS) is a limited-angle tomographic technique that provides some of the tomographic benefits of computed tomography (CT) but at reduced dose and cost. Thus, the potential for application of DTS to dental X-ray imaging seems promising. As a continuation of our dental radiography R&D, we developed an effective DTS reconstruction algorithm and implemented it in conjunction with a commercial dental CT system for potential use in dental implant placement. The reconstruction algorithm employed a backprojection filtering (BPF) method based upon optimal deblurring filters to effectively suppress both the blur artifacts originating from the out-of-focus planes and the high-frequency noise. To verify the usefulness of the reconstruction algorithm, we performed systematic simulation studies and evaluated the image characteristics. We also performed experimental work in which DTS images of enhanced anatomical resolution were successfully obtained using the algorithm, which is promising for our ongoing application to dental X-ray imaging. In this paper, our approach to the development of the DTS reconstruction algorithm and the results are described in detail.
NASA Astrophysics Data System (ADS)
Makino, Yohei; Fujii, Toshinori; Imai, Jun; Funabiki, Shigeyuki
Recently, it has become desirable to develop energy control technologies to address environmental issues such as global warming and the exhaustion of fossil fuels. Power fluctuations in large power consumers may cause instability of electric power systems and increase the cost of electric power facilities and electricity charges. Developing electric power-leveling systems (EPLS) to compensate for the power fluctuations is necessary for future electric power systems. EPLS with an SMES have been proposed as one of the countermeasures for electric power quality improvement. The SMES is superior to other energy storage devices in response and storage efficiency. The authors have proposed an EPLS based on fuzzy control with an SMES. For practical implementation, optimizing the control gain and SMES capacity is an important issue. This paper proposes a new optimization method for the EPLS. The proposed algorithm is a novel particle swarm optimization based on taper-off reflectance (TRPSO). The proposed TRPSO optimizes the design variables of the EPLS efficiently and effectively.
An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter
NASA Astrophysics Data System (ADS)
Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning
2015-08-01
An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the driving voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/με.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
Panigrahi, Swapnesh; Fade, Julien; Ramachandran, Hema; Alouini, Mehdi
2016-07-11
The efficiency of using intensity modulated light for the estimation of scattering properties of a turbid medium and for ballistic photon discrimination is theoretically quantified in this article. Using the diffusion model for modulated photon transport and considering a noisy quadrature demodulation scheme, the minimum-variance bounds on estimation of parameters of interest are analytically derived and analyzed. The existence of a variance-minimizing optimal modulation frequency is shown and its evolution with the properties of the intervening medium is derived and studied. Furthermore, a metric is defined to quantify the efficiency of ballistic photon filtering which may be sought when imaging through turbid media. The analytical derivation of this metric shows that the minimum modulation frequency required to attain significant ballistic discrimination depends only on the reduced scattering coefficient of the medium in a linear fashion for a highly scattering medium. PMID:27410875
Yatabe, Kohei; Oikawa, Yasuhiro
2016-06-10
The windowed Fourier filtering (WFF), defined as a thresholding operation in the windowed Fourier transform (WFT) domain, is a successful method for denoising a phase map and analyzing a fringe pattern. However, it has some shortcomings, such as extremely high redundancy, which results in high computational cost, and difficulty in selecting an appropriate window size. In this paper, an extension of WFF for denoising a wrapped-phase map is proposed. It is formulated as a convex optimization problem using Gabor frames instead of WFT. Two Gabor frames with differently sized windows are used simultaneously so that the above-mentioned issues are resolved. In addition, a differential operator is combined with a Gabor frame in order to preserve discontinuity of the underlying phase map better. Some numerical experiments demonstrate that the proposed method is able to reconstruct a wrapped-phase map, even for a severely contaminated situation. PMID:27409020
Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen
2009-01-15
A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.
Song, Hoon; Sung, Geeyoung; Choi, Sujin; Won, Kanghee; Lee, Hong-Seok; Kim, Hwi
2012-12-31
We propose an optical system for synthesizing double-phase complex computer-generated holograms using a phase-only spatial light modulator and a phase grating filter. Two separated areas of the phase-only spatial light modulator are optically superposed by 4-f configuration with an optimally designed grating filter to synthesize arbitrary complex optical field distributions. The tolerances related to misalignment factors are analyzed, and the optimal synthesis method of double-phase computer-generated holograms is described. PMID:23388811
Sims
2000-06-01
Movements of six basking sharks (4.0-6.5 m total body length, L(T)) swimming at the surface were tracked and horizontal velocities determined. Sharks were tracked for between 1.8 and 55 min with between 4 and 21 mean speed determinations per shark track. The mean filter-feeding swimming speed was 0.85 m s(-1) (+/-0.05 S.E., n=49 determinations) compared to the non-feeding (cruising) mean speed of 1.08 m s(-1) (+/-0.03 S.E., n=21 determinations). Both absolute (m s(-1)) and specific (L s(-1)) swimming speeds during filter-feeding were significantly lower than when cruise swimming with the mouth closed, indicating basking sharks select speeds approximately 24% lower when engaged in filter-feeding. This reduction in speed during filter-feeding could be a behavioural response to avoid increased drag-induced energy costs associated with feeding at higher speeds. Non-feeding basking sharks (4 m L(T)) cruised at speeds close to, but slightly faster (approximately 18%) than the optimum speed predicted by the Weihs (1977) [Weihs, D., 1977. Effects of size on the sustained swimming speeds of aquatic organisms. In: Pedley, T.J. (Ed.), Scale Effects in Animal Locomotion. Academic Press, London, pp. 333-338.] optimal cruising speed model. In contrast, filter-feeding basking sharks swam between 29 and 39% slower than the speed predicted by the Weihs and Webb (1983) [Weihs, D., Webb, P.W., 1983. Optimization of locomotion. In: Webb, P.W., Weihs, D. (Eds.), Fish Biomechanics. Praeger, New York, pp. 339-371.] optimal filter-feeding model. This significant under-estimation in observed feeding speed compared to model predictions was most likely accounted for by surface drag effects reducing optimum speeds of tracked sharks, together with inaccurate parameter estimates used in the general model to predict optimal speeds of basking sharks from body size extrapolations. PMID:10817828
The first on-site evaluation of a new filter optimized for TARC and developer
NASA Astrophysics Data System (ADS)
Umeda, Toru; Ishibashi, Takeo; Nakamura, Atsushi; Ide, Junichi; Nagano, Masaru; Omura, Koichi; Tsuzuki, Shuichi; Numaguchi, Toru
2008-11-01
In previous studies, we identified filter properties that have a strong effect on microbubble formation on the downstream side of the filter membrane. A new Highly Asymmetric Polyarylsulfone (HAPAS) filter was developed based on these findings. In the current study, we evaluated the newly developed HAPAS filter with environmentally preferred non-PFOS TARC in a laboratory setting. Test results confirmed that microbubble counts downstream of the filter were lower than those of a conventional HDPE filter. Further testing in a manufacturing environment confirmed that HAPAS filtration of TARC at the point of use reduced defectivity caused by microbubbles on both unpatterned and patterned wafers, compared with an HDPE filter.
Particle swarm-based structural optimization of laminated composite hydrokinetic turbine blades
NASA Astrophysics Data System (ADS)
Li, H.; Chandrashekhara, K.
2015-09-01
Composite blade manufacturing for hydrokinetic turbine application is quite complex and requires extensive optimization studies in terms of material selection, number of layers, stacking sequence, ply thickness and orientation. To avoid a repetitive trial-and-error process, structural optimization of the hydrokinetic turbine blade using particle swarm optimization was proposed to perform detailed composite lay-up optimization. Layer numbers, ply thickness and ply orientations were optimized using standard particle swarm optimization to minimize the weight of the composite blade while satisfying the failure criteria. To address the discrete combinatorial optimization problem of blade stacking sequence, a novel permutation discrete particle swarm optimization model was also developed to maximize the out-of-plane load-carrying capability of the composite blade. A composite blade design with significant material saving and satisfactory performance was presented. The proposed methodology offers an alternative and efficient design solution to composite structural optimization involving complex loading and multiple discrete and combinatorial design parameters.
GENERAL: Optimal Schemes of Teleportation One-Particle State by a Three-Particle General W State
NASA Astrophysics Data System (ADS)
Zha, Xin-Wei; Song, Hai-Yang
2010-05-01
Recently, Xiu et al. [Commun. Theor. Phys. 49 (2008) 905] proposed two schemes for teleporting an arbitrary unknown N-particle state when N groups of three-particle general W states are utilized as quantum channels. They gave the maximal probability of successful teleportation. Here we find that their operation is not optimal and that the success probability of the teleportation is not maximal. Moreover, we give the optimal scheme of operations and obtain the maximal success probability for teleportation.
Optimal spatial filtering for design of a conformal velocity sonar array
NASA Astrophysics Data System (ADS)
Traweek, Charles M.
In stark contrast to the ubiquitous optimization problem posed in the array processing literature, tactical hull sonar arrays have traditionally been designed using extrapolations of low spatial resolution empirical self noise data, dominated by hull noise at moderate speeds, in conjunction with assumptions regarding achievable conventional beamformer sidelobe levels by so-called Taylor shading for a time domain, delay-and-sum beamformer. That ad hoc process defaults to an extremely conservative (expensive and heavy) design for an array baffle as a means to assure environmental noise limited sonar performance. As an alternative, this dissertation formulates, implements, and demonstrates an objective function that results from the expression of the log likelihood ratio of the optimal Bayesian detector as a comparison to a threshold. Its purpose is to maximize the deflection coefficient of a square-law energy detector over an arbitrarily specified frequency band by appropriate selection of array shading weights for the generalized conformal velocity sonar array under the assumption that it will employ the traditional time domain delay-and-sum beamformer. The restrictive assumptions that must be met in order to appropriately use the deflection coefficient as a performance metric are carefully delineated. A series of conformal velocity sonar array spatial filter optimization problems was defined using a data set characterized by spatially complex structural noise from a large aperture conformal velocity sonar array experiment. The detection performance of an 80-element cylindrical array was optimized over a reasonably broad range of frequencies (from k0a = 12.95 to k0a = 15.56) for the cases of broadside and off-broadside signal incidence. In each case, performance of the array using optimal real-valued time domain delay-and-sum beamformer weights was much better than that achieved for either uniform shading or for Taylor shading. The result is an analytical engine
Diesel particle filter and fuel effects on heavy-duty diesel engine emissions.
Ratcliff, Matthew A; Dane, A John; Williams, Aaron; Ireland, John; Luecke, Jon; McCormick, Robert L; Voorhees, Kent J
2010-11-01
The impacts of biodiesel and a continuously regenerated (catalyzed) diesel particle filter (DPF) on the emissions of volatile unburned hydrocarbons, carbonyls, and particle associated polycyclic aromatic hydrocarbons (PAH) and nitro-PAH, were investigated. Experiments were conducted on a 5.9 L Cummins ISB, heavy-duty diesel engine using certification ultra-low-sulfur diesel (ULSD, S ≤ 15 ppm), soy biodiesel (B100), and a 20% blend thereof (B20). Against the ULSD baseline, B20 and B100 reduced engine-out emissions of measured unburned volatile hydrocarbons and PM associated PAH and nitro-PAH by significant percentages (40% or more for B20 and higher percentages for B100). However, emissions of benzene were unaffected by the presence of biodiesel and emissions of naphthalene actually increased for B100. This suggests that the unsaturated FAME in soy-biodiesel can react to form aromatic rings in the diesel combustion environment. Methyl acrylate and methyl 3-butanoate were observed as significant species in the exhaust for B20 and B100 and may serve as markers of the presence of biodiesel in the fuel. The DPF was highly effective at converting gaseous hydrocarbons and PM associated PAH and total nitro-PAH. However, conversion of 1-nitropyrene by the DPF was less than 50% for all fuels. Blending of biodiesel caused a slight reduction in engine-out emissions of acrolein, but otherwise had little effect on carbonyl emissions. The DPF was highly effective for conversion of carbonyls, with the exception of formaldehyde. Formaldehyde emissions were increased by the DPF for ULSD and B20. PMID:20886845
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
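The abstract does not spell out the cost function the PSO minimizes, so the sketch below is an assumption: a standard choice for bias and scale-factor calibration is to penalize deviation of the corrected field magnitude from the known local field strength. The function name `calib_cost` and the parameter layout are illustrative; any global optimizer, PSO included, can then minimize this cost.

```python
import numpy as np

def calib_cost(params, raw, field_norm=1.0):
    """Magnetometer calibration cost (assumed form): after removing the
    bias b and dividing by the scale factors s, every corrected reading
    should have magnitude equal to the local field strength.

    params -- array [bx, by, bz, sx, sy, sz]
    raw    -- (N, 3) array of raw magnetometer readings
    """
    b, s = params[:3], params[3:]
    corrected = (raw - b) / s
    residual = np.linalg.norm(corrected, axis=1) - field_norm
    return float(np.mean(residual ** 2))

# Synthetic check: distort unit-field readings with a known bias/scale;
# the true parameters drive the cost to (numerically) zero.
rng = np.random.default_rng(1)
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # ideal unit-field readings
true_b = np.array([0.10, -0.20, 0.05])
true_s = np.array([1.2, 0.9, 1.1])
raw = u * true_s + true_b                        # distorted measurements
```

Feeding `calib_cost` to a swarm optimizer over the 6-dimensional parameter vector recovers the bias and scale factors without any explicit error model, which is the advantage the abstract highlights.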
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto
2001-09-04
It is well understood that the stability of axial diffusion flames is dependent on the mixing behavior of the fuel and combustion air streams. Combustion aerodynamic texts typically describe flame stability and transitions from laminar diffusion flames to fully developed turbulent flames as a function of increasing jet velocity. Turbulent diffusion flame stability is greatly influenced by recirculation eddies that transport hot combustion gases back to the burner nozzle. This recirculation enhances mixing and heats the incoming gas streams. Models describing these recirculation eddies utilize conservation of momentum and mass assumptions. Increasing the mass flow rate of either fuel or combustion air increases both the jet velocity and momentum for a fixed burner configuration. Thus, differentiating between gas velocity and momentum is important when evaluating flame stability under various operating conditions. The research efforts described herein are part of an ongoing project directed at evaluating the effect of flame aerodynamics on NO{sub x} emissions from coal fired burners in a systematic manner. This research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low NO{sub x} burners. Experimental studies include both cold- and hot-flow evaluations of the following parameters: primary and secondary inlet air velocity, coal concentration in the primary air, coal particle size distribution and flame holder geometry. Hot-flow experiments will also evaluate the effect of wall temperature on burner performance.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
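The abstract names the FEIW strategy but not its exact formula, so the sketch below is a hedged illustration: a generic exponential inertia-weight schedule (`feiw`, with assumed endpoints and decay rate) plugged into a plain global-best PSO loop.

```python
import numpy as np

def feiw(t, t_max, w_start=0.9, w_end=0.4, a=3.0):
    """Illustrative exponential inertia-weight schedule (assumed form):
    decays smoothly from w_start toward w_end over t_max iterations.
    Choosing w_end > w_start instead yields an increasing schedule."""
    return w_end + (w_start - w_end) * np.exp(-a * t / t_max)

def pso(f, lo, hi, n_particles=30, n_iters=200, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO with a time-varying inertia weight."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(n_iters):
        w = feiw(t, n_iters)                     # inertia weight this step
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Sanity check on the 2-D sphere benchmark; val should end up near zero.
best, val = pso(lambda p: float(np.sum(p ** 2)),
                np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

An early large weight favors exploration and the late small weight favors exploitation, which is the exploration/exploitation balance the abstract attributes to inertia-weight selection.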
NASA Astrophysics Data System (ADS)
Yan, Hongxiang; Moradkhani, Hamid
2016-08-01
Assimilation of satellite soil moisture and streamflow data into a distributed hydrologic model has received increasing attention over the past few years. This study provides a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a distributed Sacramento Soil Moisture Accounting (SAC-SMA) model, using the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. Performance is assessed over the Salt River Watershed in Arizona, which is one of the watersheds without anthropogenic effects in the Model Parameter Estimation Experiment (MOPEX). A total of five data assimilation (DA) scenarios are designed, and the effects of the locations of streamflow gauges and of the ASCAT soil moisture on the predictions of soil moisture and streamflow are assessed. In addition, a geostatistical model is introduced to overcome the significant bias and spatial discontinuity of the satellite soil moisture. The results indicate that: (1) solely assimilating outlet streamflow can lead to biased soil moisture estimation; (2) when the study area is only partially covered by the satellite data, the geostatistical approach can estimate the soil moisture for the uncovered grid cells; (3) joint assimilation of streamflow and soil moisture from geostatistical modeling can further improve the surface soil moisture prediction. This study suggests that the geostatistical model is a helpful tool to aid the remote sensing technique and hydrologic DA studies.
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
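The paper's probabilistic automaton language model is beyond a short sketch, but the sequential importance resampling (SIR) loop it builds on can be shown generically. The interfaces below (`init`, `transition`, `likelihood`) are illustrative placeholders, not the paper's EEG model.

```python
import numpy as np

def sir_filter(obs, n_particles, init, transition, likelihood, seed=0):
    """Generic sequential importance resampling: propagate particles through
    the transition model, weight them by the observation likelihood, then
    resample to counteract weight degeneracy. Returns the posterior means."""
    rng = np.random.default_rng(seed)
    particles = init(rng, n_particles)
    means = []
    for y in obs:
        particles = transition(rng, particles)      # propose from the prior
        w = likelihood(y, particles)                # importance weights
        w = w / w.sum()
        means.append(float(np.sum(w * particles)))  # weighted posterior mean
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                  # multinomial resampling
    return np.array(means)

# Toy model: scalar random walk observed in Gaussian noise.
rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, 0.1, 50))
obs = truth + rng.normal(0.0, 0.5, 50)
est = sir_filter(
    obs, 500,
    init=lambda r, n: r.normal(0.0, 1.0, n),
    transition=lambda r, p: p + r.normal(0.0, 0.1, p.shape),
    likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.5) ** 2),
)
```

The key property the abstract exploits is that this loop only evaluates the model pointwise, so it can accommodate language models too complex for exhaustive dynamic programming.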
Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking.
Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen
2013-08-01
In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting the exact region properties of the object, which play a pivotal role for region-based tracking methods in maintaining the track. To this end, the choice of how to invoke the objective functional is made dynamically online, based on the degree of dependence between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle with statistical properties different from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system, operating in a feedback control loop, is considered globally uncertain, with parametric uncertainty modeled in interval form. The system parameter undergoing degradation (the prognostic candidate) has a degradation model that is assumed to be known a priori. The commencement of degradation is detected in a passive manner using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs), which form an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters are estimated in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties arising from noisy measurements, the parametric degradation process, environmental conditions etc. are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
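A minimal sketch of the joint state-parameter idea (not the paper's BG/I-ARR construction): each particle carries both the health state x and a hidden degradation-rate parameter theta, with small artificial evolution noise on theta so the filter can learn it. The linear degradation model, noise levels and names are all assumptions.

```python
import numpy as np

def joint_pf(obs, n, q_x=0.05, q_th=0.005, r=0.2, seed=0):
    """Joint state-parameter particle filter: augment the state with the
    unknown degradation rate theta and estimate both from noisy
    observations of the health state x."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)              # health-state particles
    th = rng.uniform(0.0, 0.2, n)            # degradation-rate particles
    est = []
    for y in obs:
        th = th + rng.normal(0.0, q_th, n)   # artificial parameter evolution
        x = x + th + rng.normal(0.0, q_x, n) # assumed linear degradation model
        w = np.exp(-0.5 * ((y - x) / r) ** 2)
        w = w / w.sum()
        est.append((float(np.sum(w * x)), float(np.sum(w * th))))
        idx = rng.choice(n, size=n, p=w)
        x, th = x[idx], th[idx]              # resample state and parameter jointly
    return np.array(est)

# Synthetic run: true degradation rate 0.1, observation noise 0.2.
rng = np.random.default_rng(3)
truth = 0.1 * np.arange(1, 101)
obs = truth + rng.normal(0.0, 0.2, 100)
est = joint_pf(obs, 1000)
```

Once theta is pinned down, the remaining useful life follows by extrapolating the degradation model to a failure threshold, with the particle spread supplying the confidence bounds the abstract mentions.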
Air quality benefits of universal particle filter and NOx controls on diesel trucks
NASA Astrophysics Data System (ADS)
Tao, L.; Mcdonald, B. C.; Harley, R.
2015-12-01
Heavy-duty diesel trucks are a major source of black carbon/particulate matter and nitrogen oxide emissions on urban and regional scales. These emissions are relevant to both air quality and climate change. Since 2010 in the US, new engines are required to be equipped with emission control systems that greatly reduce both PM and NOx emissions, by ~98% relative to 1988 levels. To reduce emissions from the legacy fleet of older trucks that still remain on the road, regulations have been adopted in California to accelerate the replacement of older trucks and thereby reduce associated emissions of PM and NOx. Use of diesel particle filters will be widespread by 2016, and universal use of catalytic converters for NOx control is required by 2023. We assess the air quality consequences of this clean-up effort in Southern California, using the Community Multiscale Air Quality model (CMAQ), and comparing three scenarios: historical (2005), present day (2016), and future year (2023). Emissions from the motor vehicle sector are mapped at high spatial resolution based on traffic count and fuel sales data. NOx emissions from diesel engines in 2023 are expected to decrease by ~80% compared to 2005, while the fraction of NOx emitted as NO2 is expected to increase from 5 to 18%. Air quality model simulations will be analyzed to quantify changes in NO2, black carbon, particulate matter, and ozone, both basin-wide and near hot spots such as ports and major highways.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING
Bayard, David S.; Schumitzky, Alan
2009-01-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches. PMID:21132112
Rodgers, Billy R.; Edwards, Michael S.
1977-01-01
Solids such as char, ash, and refractory organic compounds are removed from coal-derived liquids from coal liquefaction processes by the pressure precoat filtration method using particles of 85-350 mesh material selected from the group of bituminous coal, anthracite coal, lignite, and devolatilized coals as precoat materials and as body feed to the unfiltered coal-derived liquid.
Röhl, R; McClenny, W A; Palmer, R A
1982-02-01
The absorptivity of soot and methylene blue particles collected on Teflon filters is derived from photoacoustic measurements by least squares fitting a simple expression based on Beer's law to the experimental data. Refinements of the expression take into account the diffuse reflection of light by the filter substrate, yielding a base 10 absorptivity at 600 nm for soot of 3.00 +/- 0.37 m(2)/g. This value is in close agreement with the result of transmission measurements performed on the same samples (3.08 +/- 0.05 m(2)/g). PMID:20372465
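The fitted "simple expression based on Beer's law" can be sketched as a one-parameter least-squares problem. `fit_absorptivity` is an illustrative name, and the diffuse filter-reflection refinement mentioned in the abstract is omitted here.

```python
import numpy as np

def fit_absorptivity(areal_density, transmittance):
    """Least-squares fit of the base-10 Beer-law model T = 10**(-alpha*m):
    regress -log10(T) on areal mass density m (slope through the origin)."""
    m = np.asarray(areal_density, dtype=float)
    y = -np.log10(np.asarray(transmittance, dtype=float))
    return float(np.sum(m * y) / np.sum(m * m))

# Noiseless synthetic data with alpha = 3.0 m^2/g (the paper's soot value);
# areal densities in g/m^2 so the product alpha*m is dimensionless.
m = np.array([0.01, 0.02, 0.05, 0.10])
T = 10.0 ** (-3.0 * m)
alpha = fit_absorptivity(m, T)   # recovers 3.0
```

With real photoacoustic or transmission data, the regression residuals indicate how much the substrate-reflection correction in the paper's refined expression actually matters.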
Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan
2005-09-01
In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual microphone behind-the-ear (BTE) hearing aid is presented. This evaluation was carried out for a speech weighted noise and multitalker babble, for single and multiple jammer sound source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about the speaker position, unlike the two-stage adaptive beamformer. However, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios. PMID:16189969
Gupta, A.
1992-01-01
The effect of humidity, particle hygroscopicity and size on the mass loading capacity of glass fiber HEPA filters has been studied. At humidities above the deliquescent point, the pressure drop across the HEPA filter increased non-linearly with the areal loading density (mass collected/filtration area) of NaCl aerosol, thus significantly reducing the mass loading capacity of the filter compared to dry hygroscopic or non-hygroscopic particle mass loadings. The specific cake resistance, K{sub 2}, has been computed for different test conditions and used as a measure of the mass loading capacity. K{sub 2} was found to decrease with increasing humidity for the non-hygroscopic aluminum oxide particles and the hygroscopic NaCl particles (at humidities below the deliquescent point). It is postulated that an increase in humidity leads to the formation of a more open particulate cake which lowers the pressure drop for a given mass loading. A formula for predicting K{sub 2} for lognormally distributed aerosols (parameters obtained from impactor data) is derived. The resistance factor, R, calculated using this formula was compared to the theoretical R calculated using the Rudnick-Happel expression. For the non-hygroscopic aluminum oxide the agreement was good but for the hygroscopic sodium chloride, due to large variation in the cake porosity estimates, the agreement was poor.
Guan, Fada; Bronk, Lawrence; Titt, Uwe; Lin, Steven H.; Mirkovic, Dragan; Kerr, Matthew D.; Zhu, X. Ronald; Dinh, Jeffrey; Sobieski, Mary; Stephan, Clifford; Peeler, Christopher R.; Taleei, Reza; Mohan, Radhe; Grosshans, David R.
2015-01-01
The physical properties of particles used in radiation therapy, such as protons, have been well characterized, and their dose distributions are superior to photon-based treatments. However, proton therapy may also have inherent biologic advantages that have not been capitalized on. Unlike photon beams, the linear energy transfer (LET) and hence biologic effectiveness of particle beams varies along the beam path. Selective placement of areas of high effectiveness could enhance tumor cell kill and simultaneously spare normal tissues. However, previous methods for mapping spatial variations in biologic effectiveness are time-consuming and often yield inconsistent results with large uncertainties. Thus the data needed to accurately model relative biological effectiveness to guide novel treatment planning approaches are limited. We used Monte Carlo modeling and high-content automated clonogenic survival assays to spatially map the biologic effectiveness of scanned proton beams with high accuracy and throughput while minimizing biological uncertainties. We found that the relationship between cell kill, dose, and LET is complex and non-unique. Measured biologic effects were substantially greater than in most previous reports, and a non-linear surviving fraction response was observed even for the highest LET values. Extension of this approach could generate the data needed to optimize proton therapy plans incorporating variable RBE. PMID:25984967
Wensing, Michael; Schripp, Tobias; Uhde, Erik; Salthammer, Tunga
2008-12-15
The release of ultra-fine particles (UFP, d < 0.1 microm) from hardcopy devices such as laser printers into the indoor environment is currently a topic of high concern. The general emission behavior of a printer can be examined by conducting emission test chamber measurements with particle-counting devices. Chamber experiments with modified laser printers operated without toner or paper also revealed UFP emissions. On the basis of these results we doubt the view that UFPs primarily originate from the toner. Instead, the high-temperature fuser unit is assumed to be one source of ultra-fine particle emission. UFP release typically follows the flow path of the cooling air, which may leave the printer casing at various points (e.g. the paper tray). This limits the usability of the commercially available filter systems, because the released particles can leave the printer without passing through the filter. Chamber measurements with various filter systems retrofitted to a laser printer demonstrate different efficiencies of UFP reduction. Complementary experiments were carried out in an office room. Here the decay of the particle concentration after a print job was about ten times slower than in the test chamber. A toxicological assessment of the emitted particles requires that their chemical composition be known. Due to the low mass of the released UFPs, chemical analysis requires prior enrichment on a suitable medium. Experiments using electrostatic precipitation revealed a flame retardant (tri-xylyl phosphate) whose concentration on the medium was dependent on the number of pages printed. Whether this compound was particle-bound could not be determined. PMID:18809204
NASA Astrophysics Data System (ADS)
Khaki, Mehdi; Forootan, Ehsan; Kuhn, Michael; Awange, Joseph; Pattiaratchi, Charitha
2016-04-01
Quantifying large-scale (basin/global) water storage changes is essential to understanding the Earth's hydrological cycle. Hydrological models are typically used to simulate variations in storage compartments resulting from changes in water fluxes (i.e., precipitation, evapotranspiration and runoff) within physical or conceptual frameworks. Models, however, have limited skill in accurately simulating the storage compartments, e.g., due to uncertainty in forcing parameters, model structure, etc. In this regard, data assimilation offers the opportunity to combine observational data with a prior forecast state, improving both the accuracy of model parameters and the estimation of model states at the same time. Various methods exist for assimilating data into hydrological models. One of the most frequently used ensemble-based algorithms suitable for non-linear, high-dimensional systems is the Ensemble Kalman Filter (EnKF). Despite its efficiency and simplicity, the method has some drawbacks. To implement the EnKF, one uses the sample covariance of observations and model state variables to update a priori estimates of the state variables. The sample covariance can be suboptimal as a result of small ensemble size, model errors, model nonlinearity, and other factors. A small ensemble can also lead to spurious correlations between state components that are at a significant distance from one another and have no physical relation. To address the under-sampling issue raised by the EnKF, a covariance inflation technique in conjunction with localization was implemented. In this study, a comparison between the latest methods used in the data assimilation framework to overcome the mentioned problem is performed. For this, in addition to implementing the EnKF, we introduce and apply the Local Ensemble Kalman Filter (LEnKF) utilizing covariance localization to remove
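The EnKF analysis step described above can be sketched in a few lines. This is a minimal stochastic (perturbed-observation) EnKF with a linear observation operator, not the authors' hydrological implementation; all names and shapes are illustrative. The two sample covariances are exactly the quantities that become noisy for small ensembles, motivating inflation and localization.

```python
import numpy as np

def enkf_update(X, y, H, R, rng=None):
    """One stochastic EnKF analysis step (perturbed observations).

    X : (n, N) ensemble of model states (n = state dim, N = members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    rng = np.random.default_rng(rng)
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # observation-space anomalies
    # Sample covariances: with small N these become noisy and produce
    # spurious long-range correlations (the under-sampling issue above).
    Pxy = A @ HA.T / (N - 1)                     # state-obs cross-covariance
    Pyy = HA @ HA.T / (N - 1) + R                # innovation covariance
    # Perturbed observations give each member its own update
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + Pxy @ np.linalg.solve(Pyy, Y - HX)
```

Covariance localization would taper `Pxy` and `Pyy` element-wise with a distance-dependent correlation function before the solve; inflation would scale the anomalies `A` by a factor slightly above one.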
Hafnium and neodymium isotope composition of seawater and filtered particles from the Southern Ocean
NASA Astrophysics Data System (ADS)
Stichel, T.; Frank, M.; Haley, B. A.; Rickli, J.; Venchiarutti, C.
2009-12-01
Radiogenic hafnium (Hf) and neodymium (Nd) isotopes have been used as tracers for past continental weathering regimes and ocean circulation. To date, however, there are only very few data available on dissolved Hf isotope compositions in present-day seawater and there is a complete lack of particulate data. During expedition ANTXXIV/3 (February to April 2008) we collected particulate samples (> 0.8 µm), which were obtained by filtration of 270-700 liters of water. The samples were separated from the filters, completely dissolved, and purified for Nd and Hf isotope determination by TIMS and MC-ICPMS, respectively. In addition, we collected filtered (0.45 µm) seawater samples (20-120 liters) to determine the dissolved isotopic composition of Hf and Nd. The Hf isotope composition of the particulate fraction in the Drake Passage ranged from 0 to -28 ɛHf and is thus similar to that observed in core top sediments from the entire Southern Ocean in a previous study. The most unradiogenic and isotopically homogeneous Hf isotope compositions in our study were found near the Antarctic Peninsula. Most of the stations north of the Southern Antarctic Circumpolar Front (SACC) show a large variation in ɛHf between 0 and -23, both within the water column of one station and between the stations. The locations at which these Hf isotope compositions were measured are mostly far away from the potential source areas. Nd, in contrast, was nearly absent throughout the entire sample set and the only measurable ɛNd data ranged from 0 to -7, which is in good agreement with the sediment data in that area. The dissolved seawater isotopic compositions of both Hf and Nd show only minor variance (ɛHf = 4.2 to 4.7 and ɛNd = -8.8 to -7.6, respectively). These patterns in Hf isotopes and the nearly complete absence of Nd indicate that the particulate fraction contains little terrigenous material but is almost entirely dominated by biogenic opal. The homogenous and relatively radiogenic
Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi
2015-05-15
Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity, and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty grays in four fractions were prescribed as the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased ("V" shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible. The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF compared
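The plan-quality metrics above can be computed directly from a 3-D dose grid. The sketch below uses the modified GI exactly as the abstract defines it (50% isodose volume / target volume); the HI and conformity index definitions vary between studies and are assumed here (Dmax/prescription and prescription isodose volume / target volume, respectively), since the abstract does not spell them out.

```python
import numpy as np

def plan_indices(dose, target, voxel_cc, rx):
    """Simple plan-quality metrics from a 3-D dose grid.

    dose     : ndarray of voxel doses (Gy)
    target   : boolean mask of the PTV, same shape as dose
    voxel_cc : volume of one voxel (cc)
    rx       : prescription dose (Gy)
    """
    tv  = target.sum() * voxel_cc               # target volume
    piv = (dose >= rx).sum() * voxel_cc         # prescription isodose volume
    v50 = (dose >= 0.5 * rx).sum() * voxel_cc   # 50% isodose volume
    hi  = dose[target].max() / rx               # heterogeneity index (assumed form)
    ci  = piv / tv                              # conformity index (assumed form)
    mgi = v50 / tv                              # modified GI, per the abstract
    return hi, ci, mgi
```

Sweeping the leaf margin and re-evaluating these indices per plan is what produces the "V"-shaped curves described in the results.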
NASA Astrophysics Data System (ADS)
Sue-Ann, Goh; Ponnambalam, S. G.
This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and buyers. All these parameters depend upon the revenue sharing between the vendor and buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.
Design Optimization of Pin Fin Geometry Using Particle Swarm Optimization Algorithm
Hamadneh, Nawaf; Khan, Waqar A.; Sathasivam, Saratha; Ong, Hong Choon
2013-01-01
Particle swarm optimization (PSO) is employed to investigate the overall performance of a pin fin. The following study examines the effect of governing parameters on overall thermal/fluid performance associated with different fin geometries, including rectangular plate fins as well as square, circular, and elliptical pin fins. The idea of entropy generation minimization (EGM) is employed to combine the effects of thermal resistance and pressure drop within the heat sink. A general dimensionless expression for the entropy generation rate is obtained by considering a control volume around the pin fin, including the base plate, and applying the conservation equations for mass and energy with the entropy balance. Selected fin geometries are examined for heat transfer, fluid friction, and the minimum entropy generation rate corresponding to different parameters, including axis ratio, aspect ratio, and Reynolds number. The results clearly indicate that the preferred fin profile is very dependent on these parameters. PMID:23741525
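Both PSO applications above rest on the same core algorithm: particles track their personal best and the swarm's global best while exploring a bounded design space. The sketch below is a generic global-best PSO minimizing an arbitrary objective; it is not the papers' EGM or channel-profit objective, and the inertia/acceleration coefficients are common textbook defaults, not values from the studies.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box constraints with a basic global-best PSO.

    f      : objective taking a 1-D array, returning a scalar
    bounds : list of (lo, hi) pairs, one per design variable
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # enforce box bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

For the pin-fin study, `f` would evaluate the dimensionless entropy generation rate for a candidate geometry (axis ratio, aspect ratio, etc.); for the supply-chain study, the negated channel profit over sales quantities.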
Far-infrared filters utilizing small particle scattering and antireflection coatings
NASA Technical Reports Server (NTRS)
Armstrong, K. R.; Low, F. J.
1974-01-01
High-transmission, low-pass scatter filters for blocking at wavelengths from 3.5 to 50 microns and single-layer antireflection coatings for optical materials used in the 25- to 300-micron region of the spectrum are described. The application of both techniques to liquid-He-cooled filters permits the construction of efficient low-pass and medium-width bandpass filters for use throughout the far infrared.
The determination and optimization of (rutile) pigment particle size distributions
NASA Technical Reports Server (NTRS)
Richards, L. W.
1972-01-01
A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.
Todt, Daniel; Jenssen, Petter D; Klemenčič, Aleksandra Krivograd; Oarga, Andreea; B