Iris recognition using Gabor filters optimized by the particle swarm algorithm
NASA Astrophysics Data System (ADS)
Tsai, Chung-Chih; Taur, Jin-Shiuh; Tao, Chin-Wang
2009-04-01
An efficient feature extraction algorithm based on optimized Gabor filters and a relative variation analysis approach is proposed for iris recognition. The Gabor filters are optimized by using the particle swarm algorithm to adjust the parameters. Moreover, a sequential scheme is developed to determine the number of filters in the optimal Gabor filter bank. In the preprocessing step, the lower part of the iris image is unwrapped and normalized to a rectangular block that is then decomposed by the optimal Gabor filters. After that, a simple encoding method is adopted to generate a compact iris code. Experimental results show that with a smaller iris code size, the proposed method can produce comparable performance to that of the existing iris recognition systems.
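As a rough illustration of the search space such an optimizer explores, the sketch below builds a tiny bank of real Gabor kernels and derives a one-bit-per-filter code from the sign of each filtered response. The parameterization (wavelength, orientation, envelope width, aspect ratio) and the sign-based encoding are assumptions for illustration, not the paper's exact formulation; a swarm optimizer would tune these parameters and the number of filters.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=1.0):
    """Real 2D Gabor kernel; these parameters are the kind of quantities
    a swarm optimizer would tune (illustrative parameterization)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by the filter orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def encode(block, kernels):
    """One-bit-per-filter code: sign of each filtered response."""
    return [int(np.sum(block * k) > 0) for k in kernels]

rng = np.random.default_rng(0)
block = rng.standard_normal((15, 15))   # stand-in for a normalized iris block
bank = [gabor_kernel(15, w, t, 3.0) for w, t in [(4, 0.0), (6, np.pi / 4)]]
code = encode(block, bank)
print(code)   # a compact binary code, one bit per filter
```

The compactness of the final iris code then depends directly on how many filters the sequential selection scheme retains in the bank.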
Yang, Chenying; Hong, Liang; Shen, Weidong; Zhang, Yueguang; Liu, Xu; Zhen, Hongyu
2013-04-22
We propose three color filters (red, green, and blue) based on a two-dimensional (2D) grating, which maintain the same perceived specular colors over a broad range of incident angles for the average polarization. The particle swarm optimization (PSO) method is employed to design these filters, to the best of our knowledge for the first time. Two merit functions, involving the reflectance curves and the CIEDE2000 color-difference formula, respectively, are constructed to adjust the structural parameters during the optimization procedure. Three primary color filters located at 637 nm, 530 nm, and 446 nm with high saturation are obtained, with peak reflectances of 89%, 83%, and 66%. The reflectance curves at different incident angles coincide, and the color difference is less than 8 for incident angles up to 45°. The electric field distribution of the structure is finally studied to analyze its optical properties.
Optimizing Parameters of Process-Based Terrestrial Ecosystem Model with Particle Filter
NASA Astrophysics Data System (ADS)
Ito, A.
2014-12-01
Present terrestrial ecosystem models still contain substantial uncertainties, as model intercomparison studies have shown, because of poor model constraint by observational data. Development of advanced methodology for data-model fusion, or data assimilation, is therefore an important task for reducing the uncertainties and improving model predictability. In this study, I apply the particle filter (or sequential Monte Carlo filter) to optimize parameters of a process-based terrestrial ecosystem model (VISIT). The particle filter is a data-assimilation method in which the probability distribution of the model state is approximated by many samples of the parameter set (i.e., particles). It is a computationally intensive method and is applicable to nonlinear systems; this is an advantage in comparison with other techniques like the Ensemble Kalman filter and variational methods. At several sites, I used flux measurement data of atmosphere-ecosystem CO2 exchange in sequential and non-sequential manners. In the sequential data assimilation, time-series data at 30-min or daily steps were used to optimize gas-exchange-related parameters; this method would also be effective for assimilating satellite observational data. In the non-sequential case, on the other hand, the annual or long-term mean budget was adjusted to observations; this method would also be effective for assimilating carbon stock data. Although technical issues remain (e.g., the appropriate number of particles and the likelihood function), I demonstrate that the particle filter is an effective data-assimilation method for process-based models, enhancing collaboration between field and model researchers.
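A minimal sketch of the sequential scheme, under stand-in assumptions: a toy one-parameter "flux model" replaces VISIT, the observation noise is Gaussian, and resampling with a small jitter guards against particle degeneracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_flux(param, t):
    """Toy stand-in for the ecosystem model's CO2 flux at time t,
    controlled by a single gas-exchange parameter (illustrative only)."""
    return param * np.sin(0.1 * t)

# Synthetic "observations" generated with a true parameter value of 2.0
true_param, obs_sd = 2.0, 0.3
times = np.arange(50)
obs = model_flux(true_param, times) + rng.normal(0.0, obs_sd, times.size)

# Particles: candidate parameter values with uniform initial weights
n = 500
particles = rng.uniform(0.5, 4.0, n)
weights = np.full(n, 1.0 / n)

for t, y in zip(times, obs):
    # Weight each particle by the Gaussian likelihood of the observation
    lik = np.exp(-0.5 * ((y - model_flux(particles, t)) / obs_sd) ** 2)
    weights *= lik
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.02, n)  # jitter
        weights = np.full(n, 1.0 / n)

print(np.sum(weights * particles))  # posterior mean, near the true value 2.0
```

The effective-sample-size test and the jitter magnitude are exactly the kind of tuning choices the abstract flags as open technical issues.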
An Optimal Observing System Study for the Kuroshio Extension using Particle Filters
NASA Astrophysics Data System (ADS)
Kramer, Werner; van Leeuwen, Peter Jan; Pierini, Stefano; Dijkstra, Henk
2010-05-01
The Kuroshio Extension - the eastward-flowing free jet formed when the warm waters of the Kuroshio separate from the Japanese coast - reveals bimodal behavior. It changes from an elongated, energetic meandering jet into a weaker, unstable jet with a reduced zonal penetration. Many of its characteristics, e.g. the decadal period and the more stable character of the elongated state, are also observed in a reduced-gravity ocean model of the northern Pacific basin with a schematic Japanese coastline driven by a constant double-gyre wind field. The success of this idealized model suggests that intrinsic nonlinear mechanisms play a major role in determining the meander pattern of the mean flow. The low complexity of the model makes it ideal for performing an observing system study. Here, we take a new approach by using particle filters to assimilate observations into the model. An ensemble of model states is integrated over time from an initial distribution. The first approach is to pick one run as the synthetic truth. Observations are produced from this synthetic truth with an additional observation error. The particle filter technique adjusts the weight of each ensemble run - each particle - according to the observation value and the error distribution. From the ensemble and its weight distribution, the expectation and probability distribution of the state vector can be computed. As the ensemble itself is not altered by the filter, different sets of observations, e.g. with different geometrical configurations, locations, and/or time resolutions, can be analyzed a posteriori. The particle filter analysis allows us to identify which observations have a large impact on reconstructing the true state of the Kuroshio Extension, and more precisely, which observations contribute to a (local) reduction in the entropy of the ensemble. In this way, each observation is linked to an area of influence, which permits determining the flow of information. We will present results where
Optimization of integrated polarization filters.
Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J
2014-10-01
This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics.
Towards robust particle filters for high-dimensional systems
NASA Astrophysics Data System (ADS)
van Leeuwen, Peter Jan
2015-04-01
In recent years particle filters have matured, and several variants are now available that are not degenerate for high-dimensional systems. Often they are based on ad hoc combinations with Ensemble Kalman filters. Unfortunately, it is unclear what approximations are made when these hybrids are used. The proper way to derive particle filters for high-dimensional systems is to exploit the freedom in the proposal density. It is well known that using an Ensemble Kalman filter as proposal density (the so-called Weighted Ensemble Kalman Filter) does not work for high-dimensional systems. However, much better results are obtained when weak-constraint 4DVar is used as the proposal, leading to the implicit particle filter. Still, this filter is degenerate when the number of independent observations is large. The Equivalent-Weights Particle Filter works well in systems of arbitrary dimension, but it contains a few tuning parameters that have to be chosen well to avoid biases. In this paper we discuss ways to derive more robust particle filters for high-dimensional systems. Using ideas from large-deviation theory and optimal transportation, particle filters will be generated that are robust and work well in these systems. It will be shown that all successful filters can be derived from one general framework. The performance of the filters will also be tested on simple but high-dimensional systems and, if time permits, on a high-dimensional, highly nonlinear barotropic vorticity equation model.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
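The core particle swarm update can be sketched as follows; the inertia and acceleration weights are typical textbook values, and a simple sphere function stands in for the structural-design objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective; a structural-design run would substitute its own
    (possibly discrete or discontinuous) merit function here."""
    return np.sum(x**2, axis=1)

dim, n, iters = 5, 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Pull each particle toward its personal best and the swarm best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(np.min(pbest_val))  # approaches 0, the unconstrained minimum
```

Each objective evaluation is independent across the swarm, which is the concurrency the abstract's last sentence alludes to.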
Gaussian particle flow implementation of PHD filter
NASA Astrophysics Data System (ADS)
Zhao, Lingling; Wang, Junjie; Li, Yunpeng; Coates, Mark J.
2016-05-01
Particle filter and Gaussian mixture implementations of random finite set filters have been proposed to tackle the issue of jointly estimating the number of targets and their states. The Gaussian mixture PHD (GM-PHD) filter has a closed-form expression for the PHD for linear and Gaussian target models, and extensions using the extended Kalman filter or unscented Kalman filter have been developed to allow the GM-PHD filter to accommodate mildly nonlinear dynamics. Errors resulting from linearization or model mismatch are unavoidable. A particle filter implementation of the PHD filter (PF-PHD) is more suitable for nonlinear and non-Gaussian target models. The particle filter implementations are much more computationally expensive and performance can suffer when the proposal distribution is not a good match to the posterior. In this paper, we propose a novel implementation of the PHD filter named the Gaussian particle flow PHD filter (GPF-PHD). It employs a bank of particle flow filters to approximate the PHD; these play the same role as the Gaussian components in the GM-PHD filter but are better suited to non-linear dynamics and measurement equations. Using the particle flow filter allows the GPF-PHD filter to migrate particles to the dense regions of the posterior, which leads to higher efficiency than the PF-PHD. We explore the performance of the new algorithm through numerical simulations.
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of the flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
Westinghouse Advanced Particle Filter System
Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.; Bachovchin, D.M.
1996-12-31
Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify, through analysis and testing, a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper reports on the development and status of testing of the Westinghouse Advanced Hot Gas Particle Filter (W-APF), including: W-APF integrated operation with the American Electric Power 70 MW PFBC clean coal facility--approximately 6000 test hours completed; approximately 2500 hours of testing at the Hans Ahlstrom 10 MW PCFB facility located in Karhula, Finland; over 700 hours of operation at the Foster Wheeler 2 MW 2nd-generation PFBC facility located in Livingston, New Jersey; the status of Westinghouse HGF supply for the DOE Southern Company Services Power Systems Development Facility (PSDF) located in Wilsonville, Alabama; the status of Westinghouse development and testing of HGFs for biomass power generation; and the status of the design and supply of the HGF unit for the 95 MW Pinon Pine IGCC Clean Coal Demonstration.
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO; a GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents, and the algorithm relies on this combination of traits to provide, ideally, a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate candidate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific detail is of no concern to the optimizer. These algorithms were originally developed to support entry
System and Apparatus for Filtering Particles
NASA Technical Reports Server (NTRS)
Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)
2015-01-01
A modular pre-filtration apparatus may be beneficial to extend the life of a filter. The apparatus may include an impactor that can collect a first set of particles in the air, and a scroll filter that can collect a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus causing the life of the filter to be increased.
Angle only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2011-09-01
We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter (EKF) with a new nonlinear filter that uses particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle-only tracking is of interest for several different sensors, for example, passive optics and radars in which range and Doppler data are spoiled by jamming.
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar
2002-06-30
Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. Two types of sheet filter elements were procured and assessed in the program
Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer
2015-01-01
The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m⁻² when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m⁻² with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037
Particle filtering for passive fathometer tracking.
Michalopoulou, Zoi-Heleni; Yardim, Caglar; Gerstoft, Peter
2012-01-01
Seabed interface depths and fathometer amplitudes are tracked for an unknown and changing number of sub-bottom reflectors. This is achieved by incorporating conventional and adaptive fathometer processors into sequential Monte Carlo methods for a moving vertical line array. Sediment layering information and time-varying fathometer response amplitudes are tracked by using a multiple model particle filter with an uncertain number of reflectors. Results are compared to a classical particle filter where the number of reflectors is considered to be known. Reflector tracking is demonstrated for both conventional and adaptive processing applied to the drifting array data from the Boundary 2003 experiment. The layering information is successfully tracked by the multiple model particle filter even for noisy fathometer outputs.
Quantitative filter forensics for indoor particle sampling.
Haaland, D; Siegel, J A
2017-03-01
Filter forensics is a promising indoor air investigation technique involving the analysis of dust which has collected on filters in central forced-air heating, ventilation, and air conditioning (HVAC) or portable systems to determine the presence of indoor particle-bound contaminants. In this study, we summarize past filter forensics research to explore what it reveals about the sampling technique and the indoor environment. There are 60 investigations in the literature that have used this sampling technique for a variety of biotic and abiotic contaminants. Many studies identified differences between contaminant concentrations in different buildings using this technique. Based on this literature review, we identified a lack of quantification as a gap in the past literature. Accordingly, we propose an approach to quantitatively link contaminants extracted from HVAC filter dust to time-averaged integrated air concentrations. This quantitative filter forensics approach has great potential to measure indoor air concentrations of a wide variety of particle-bound contaminants. Future studies directly comparing quantitative filter forensics to alternative sampling techniques are required to fully assess this approach, but analysis of past research suggests the enormous possibility of this approach.
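The proposed quantitative step amounts to dividing the recovered contaminant mass by the product of the filtered air volume and the filter's capture efficiency. A hypothetical worked example (all numbers invented, not from the study):

```python
# Illustrative arithmetic linking mass extracted from HVAC filter dust to a
# time-averaged indoor air concentration. All values are hypothetical.
mass_ng = 800.0          # contaminant mass extracted from the filter (ng)
flow_m3_per_h = 2000.0   # airflow through the filter (m^3/h)
runtime_h = 400.0        # fan runtime over the sampling period (h)
efficiency = 0.5         # filter capture efficiency for this particle size

air_volume_m3 = flow_m3_per_h * runtime_h
conc_ng_per_m3 = mass_ng / (efficiency * air_volume_m3)
print(conc_ng_per_m3)  # → 0.002 ng/m^3 time-averaged concentration
```

The hard part in practice is measuring the denominator: fan runtime and size-resolved capture efficiency are exactly the quantities a quantitative filter forensics protocol must pin down.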
Adaptive Mallow's optimization for weighted median filters
NASA Astrophysics Data System (ADS)
Rachuri, Raghu; Rao, Sathyanarayana S.
2002-05-01
This work extends the idea of spectral optimization for the design of weighted median (WM) filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filters are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows' theory. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of a filter's detail-preserving ability and yield non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively, which are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass, as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
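For reference, the plain positive-weight weighted median smoother that such designs build on can be written in a few lines; the window weights below are arbitrary illustrative values, not an optimized set.

```python
def weighted_median(samples, weights):
    """Weighted median: sort the samples, accumulate their weights, and
    return the first sample at which the running sum reaches half the total."""
    pairs = sorted(zip(samples, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, wgt in pairs:
        acc += wgt
        if acc >= half:
            return value

def wm_filter(signal, weights):
    """Slide a weighted-median window over the signal (edges truncated)."""
    k = len(weights) // 2
    out = []
    for i in range(k, len(signal) - k):
        out.append(weighted_median(signal[i - k:i + k + 1], weights))
    return out

noisy = [1, 1, 9, 1, 1, 1, 5, 5, 5, 5]   # impulse at index 2, step at index 6
print(wm_filter(noisy, [1, 2, 1]))       # → [1, 1, 1, 1, 1, 5, 5, 5]
```

Note how the impulse is removed while the step edge survives intact, which is the robustness property the abstract contrasts with linear filters.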
Analyzing Meteoroid Flights Using Particle Filters
NASA Astrophysics Data System (ADS)
Sansom, E. K.; Rutten, M. G.; Bland, P. A.
2017-02-01
Fireball observations from camera networks provide position and time information along the trajectory of a meteoroid that is transiting our atmosphere. The complete dynamical state of the meteoroid at each measured time can be estimated using Bayesian filtering techniques. A particle filter is a novel approach to modeling the uncertainty in meteoroid trajectories and incorporates errors in initial parameters, the dynamical model used, and observed position measurements. Unlike other stochastic approaches, a particle filter does not require predefined values for initial conditions or unobservable trajectory parameters. The Bunburra Rockhole fireball, observed by the Australian Desert Fireball Network (DFN) in 2007, is used to determine the effectiveness of a particle filter for use in fireball trajectory modeling. The final mass is determined to be 2.16 ± 1.33 kg with a final velocity of 6030 ± 216 m s⁻¹, similar to previously calculated values. The full automatability of this approach will allow an unbiased evaluation of all events observed by the DFN and lead to a better understanding of the dynamical state and size frequency distribution of asteroid and cometary debris in the inner solar system.
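A bootstrap (sampling importance resampling) particle filter of this general kind can be sketched on a toy one-dimensional trajectory; constant deceleration and Gaussian position measurements are crude stand-ins for the full drag/ablation dynamics and camera-network observations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamics: constant deceleration stands in for drag and ablation.
dt, decel, q, r = 0.1, 50.0, 5.0, 100.0   # step, decel, vel. noise sd, meas. var.
steps = 60
true = np.array([0.0, 6000.0])            # position (m), speed (m/s)
obs = []
for _ in range(steps):
    true = true + dt * np.array([true[1], -decel])
    obs.append(true[0] + rng.normal(0.0, np.sqrt(r)))

n = 1000
parts = np.column_stack([rng.normal(0, 50, n), rng.normal(6000, 200, n)])
for z in obs:
    # Propagate each particle through the dynamics with process noise
    parts[:, 0] += dt * parts[:, 1]
    parts[:, 1] += -dt * decel + rng.normal(0, q, n)
    # Weight by the position-measurement likelihood, then resample (SIR)
    w = np.exp(-0.5 * (z - parts[:, 0]) ** 2 / r)
    w /= w.sum()
    parts = parts[rng.choice(n, size=n, p=w)]

print(parts[:, 1].mean())  # speed estimate, near the true final speed of 5700 m/s
```

The real filter augments the state with mass and a shape/density parameter, which is what makes the final-mass posterior above possible.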
Ultrasonic tracking of shear waves using a particle filter
Ingle, Atul N.; Ma, Chi; Varghese, Tomy
2015-01-01
Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value because they help differentiate stiffer areas from softer areas, which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fit. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used to produce visually appealing shear wave velocity reconstructions that effectively delineate the various areas of the phantom, with image quality comparable to existing techniques. PMID:26520761
Optimal multiobjective design of digital filters using spiral optimization technique.
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2013-01-01
The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.
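In two dimensions, spiral dynamics rotate each search point about the current best solution while contracting toward it. The sketch below uses this standard rotate-and-contract update with illustrative settings (rotation angle, contraction rate, toy quadratic cost), not the authors' exact configuration or filter-design objective.

```python
import math, random

random.seed(3)

def objective(x, y):
    """Toy cost standing in for the filter-design error measure."""
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

# Spiral dynamics: rotate each point about the current best and contract
r, theta, n, iters = 0.95, math.pi / 4, 20, 150
cos_t, sin_t = math.cos(theta), math.sin(theta)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
best = min(pts, key=lambda p: objective(*p))

for _ in range(iters):
    new_pts = []
    for x, y in pts:
        dx, dy = x - best[0], y - best[1]
        # Rotate by theta, contract by r, re-centre on the best point
        new_pts.append((best[0] + r * (cos_t * dx - sin_t * dy),
                        best[1] + r * (sin_t * dx + cos_t * dy)))
    pts = new_pts
    cand = min(pts, key=lambda p: objective(*p))
    if objective(*cand) < objective(*best):
        best = cand

print(best)  # converges toward the minimum at (1, -2)
```

The contraction rate r plays the role the abstract ascribes to the method's balance between global exploration and fast convergence.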
Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique
NASA Astrophysics Data System (ADS)
Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid
2014-01-01
The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.
Optimizing internal structure of membrane filters
NASA Astrophysics Data System (ADS)
Cummings, Linda; Sanaei, Pejman
2016-11-01
Membrane filters are in widespread use, and manufacturers have considerable interest in improving their performance, in terms of particle retention properties, and total throughput over the filter lifetime. In this regard, it has long been known that membrane properties should not be uniform over the membrane depth; rather, membrane permeability should decrease in the direction of flow. While much research effort has been focused on investigating favorable membrane permeability gradients, this work has been largely empirical in nature. We present a simple, first-principles model for flow through and fouling of a membrane filter, accounting for permeability gradients via variable pore size. Our model accounts for two fouling modes: sieving; and particle adsorption within pores. For filtration driven by a fixed pressure drop, flux through the membrane eventually goes to zero, as fouling occurs and pores close. We address issues of filter performance as the internal pore structure is varied, by comparing the total throughput obtained with equal-resistance membranes. Within certain classes of pore profiles we are able to find the optimum pore profile that maximizes total throughput over the filter lifetime, while maintaining acceptable particle removal from the feed. Partial support from NSF DMS 1261596 is gratefully acknowledged.
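The fixed-pressure fouling mechanism for a single adsorbing pore can be caricatured in a few lines: flux scales as the fourth power of pore radius (Hagen-Poiseuille), the radius shrinks at a constant adsorption rate, and total throughput is the flux integrated until the pore closes. The constants are illustrative, not taken from the model.

```python
# Minimal sketch of fixed-pressure fouling by adsorption in one pore.
dt, shrink_rate = 0.01, 0.02
a, throughput = 1.0, 0.0      # pore radius and accumulated throughput
while a > 0.0:
    flux = a ** 4             # Hagen-Poiseuille: flux ~ a^4 at fixed pressure drop
    throughput += flux * dt
    a -= shrink_rate * dt     # radius shrinks as particles adsorb to the wall
print(round(throughput, 3))   # finite lifetime throughput as the pore closes
```

Optimizing the depth profile then amounts to distributing initial radii over the membrane depth to maximize this integral subject to a fixed total resistance and a retention constraint.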
Point Set Registration via Particle Filtering and Stochastic Dynamics
Sandhu, Romeil; Dambreville, Samuel; Tannenbaum, Allen
2013-01-01
In this paper, we propose a particle filtering approach for the problem of registering two point sets that differ by a rigid body transformation. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in pose parameters obtained by running a few iterations of a certain local optimizer. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer approaches for registration. Thus, the novelty of our method is threefold: First, we employ a particle filtering scheme to drive the point set registration process. Second, we present a local optimizer that is motivated by the correlation measure. Third, we increase the robustness of the registration performance by introducing a dynamic model of uncertainty for the transformation parameters. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity (with respect to particle size) as well as maintains the temporal coherency of the state (no loss of information). Also unlike some alternative approaches for point set registration, we make no geometric assumptions on the two data sets. Experimental results are provided that demonstrate the robustness of the algorithm to initialization, noise, missing structures, and/or differing point densities in each set, on several challenging 2D and 3D registration scenarios. PMID:20558877
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for non-Gaussian problems with error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distribution as well as Monte Carlo analysis capability are included to enable statistical performance evaluations.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
Incremental Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Xu, Xiaohua; Pan, Zhoujin; Xi, Yanqiu; Chen, Ling
By simulating the growth of population size in human evolution, a PSO algorithm with an incrementally increasing particle (swarm) size, IPPSO, is proposed. Without changing the PSO operations, IPPSO can obtain better solutions at a lower time cost by modifying the structure of the traditional PSO. Experimental results show that IPPSO with a logistic growth model is more efficient and requires less computation time than with a linear growth function when solving more complex problems.
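For reference, the global-best PSO that IPPSO builds on can be sketched as follows. This is illustrative Python with conventional textbook constants (inertia 0.7, acceleration 1.5), not the paper's settings; the incremental variant would grow the swarm size over iterations instead of keeping `n` fixed.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Global-best PSO minimizing f over a box; returns (best_x, best_val)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))            # positions
    v = np.zeros((n, dim))                       # velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # conventional constants
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```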
Groupwise surface correspondence using particle filtering
NASA Astrophysics Data System (ADS)
Li, Guangxu; Kim, Hyoungseop; Tan, Joo Kooi; Ishikawa, Seiji
2015-03-01
To obtain an effective interpretation of organic shape using statistical shape models (SSMs), establishing the correspondence of landmarks across all training samples is the most challenging part of model building. In this study, a coarse-to-fine groupwise correspondence method for 3-D polygonal surfaces is proposed. We prepare a reference model in advance. Then all training samples are mapped to a unified spherical parameter space. According to the positions of the landmarks of the reference model, candidate regions for correspondence are chosen. Finally, we refine the perceptually correct correspondences between landmarks using a particle filter algorithm, where the likelihood of local surface features is introduced as the criterion. The proposed method was applied to the correspondence of 9 cases of left lung training samples. Experimental results show the proposed method is flexible and under-constrained.
State estimation and prediction using clustered particle filters
Lee, Yoonsang; Majda, Andrew J.
2016-01-01
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332
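A standard bootstrap particle filter with an effective-sample-size resampling test, the baseline that clustered filters improve upon, can be sketched as follows. This is illustrative Python on a common scalar benchmark state-space model, not the Lorenz 96 system used in the paper; all parameter values are assumed for the example.

```python
import numpy as np

def simulate(T=80, q=1.0, r=1.0, seed=1):
    """Benchmark nonlinear state-space model with a linear observation."""
    rng = np.random.default_rng(seed)
    x, xs, ys = 0.1, [], []
    for t in range(T):
        x = 0.5 * x + 25 * x / (1 + x * x) + 8 * np.cos(1.2 * t) \
            + rng.normal(0, np.sqrt(q))
        xs.append(x)
        ys.append(x + rng.normal(0, np.sqrt(r)))
    return np.array(xs), np.array(ys)

def bootstrap_pf(ys, n=500, q=1.0, r=1.0, seed=2):
    """Bootstrap particle filter; resamples when the ESS drops below n/2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 2, n)
    w = np.full(n, 1.0 / n)
    est = []
    for t, y in enumerate(ys):
        # forecast: propagate each particle through the dynamics
        x = 0.5 * x + 25 * x / (1 + x * x) + 8 * np.cos(1.2 * t) \
            + rng.normal(0, np.sqrt(q), n)
        # analysis: reweight by the observation likelihood
        w = w * np.exp(-0.5 * (y - x) ** 2 / r)
        w = w / w.sum()
        if 1.0 / np.sum(w ** 2) < n / 2:         # effective sample size test
            x = x[rng.choice(n, n, p=w)]         # multinomial resampling
            w = np.full(n, 1.0 / n)
        est.append(np.sum(w * x))
    return np.array(est)
```

The clustered filter of the paper replaces the single global weight vector with per-cluster weights so each observation only affects neighboring state variables; the sketch above shows only the generic global version.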
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.
2012-12-01
Hydrologic data assimilation techniques provide a means to improve river discharge forecasts by updating hydrologic model states and correcting the atmospheric forcing data through optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation technique used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques: the ensemble Kalman filter (EnKF), the particle filter (PF), and a variational (VAR) technique. Each assimilates discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km², in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique in hydrologic applications.
A backtracking algorithm that deals with particle filter degeneracy
NASA Astrophysics Data System (ADS)
Baarsma, Rein; Schmitz, Oliver; Karssenberg, Derek
2016-04-01
Particle filters are an excellent way to deal with stochastic models incorporating Bayesian data assimilation. While they are computationally demanding, the particle filter has no problem with nonlinearity and it accepts non-Gaussian observational data. In the geoscientific field it is this computational demand that creates a problem, since dynamic grid-based models are often already quite computationally demanding. As such it is of the utmost importance to keep the number of samples in the filter as small as possible. However, small sample populations often lead to filter degeneracy, especially in models with high stochastic forcing. Filter degeneracy renders the sample population useless, as the population is no longer statistically informative. We have created an algorithm in an existing data assimilation framework that reacts to and deals with filter degeneracy based on Spiller et al. [2008]. During the Bayesian updating step of the standard particle filter, the algorithm tests the sample population for filter degeneracy. If filter degeneracy has occurred, the algorithm resets to the last time the filter worked correctly and recalculates the failed timespan of the filter with an increased sample population. The sample population is then reduced to its original size and the particle filter continues as normal. This algorithm was created in the PCRaster Python framework, an open source tool that enables spatio-temporal forward modelling in Python [Karssenberg et al., 2010]. The framework already contains several data assimilation algorithms, including a standard particle filter and a Kalman filter. The backtracking particle filter algorithm has been added to the framework, which will make it easy to implement in other research. The performance of the backtracking particle filter is tested against a standard particle filter using two models. The first is a simple nonlinear point model, and the second is a more complex geophysical model. The main testing
A Parallel Particle Swarm Optimizer
2003-01-01
Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based ... concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed, and its performance and characteristics are demonstrated for the biomechanical system identification problem as an example.
GNSS data filtering optimization for ionospheric observation
NASA Astrophysics Data System (ADS)
D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.
2015-12-01
In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. In particular, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. To tackle the non-ionospheric effects induced by multipath at the ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), which aims at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained from the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome. The results of the exercise show that, in the considered case of a noisy
Blended particle filters for large-dimensional chaotic dynamical systems.
Majda, Andrew J; Qi, Di; Sapsis, Themistoklis P
2014-05-27
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below.
Motion-compensated speckle tracking via particle filtering
NASA Astrophysics Data System (ADS)
Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu
2015-07-01
Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging, because of its high accuracy and relative simplicity of implementation. However, the high time consumption of this space-domain method remains a significant drawback. To develop a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle, superposed with global motion, throughout the region of interest (ROI). The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
Modular particle filtering FPGA hardware architecture for brain machine interfaces.
Mountney, John; Obeid, Iyad; Silage, Dennis
2011-01-01
As the computational complexities of neural decoding algorithms for brain machine interfaces (BMI) increase, their implementation through sequential processors becomes prohibitive for real-time applications. This work presents the field programmable gate array (FPGA) as an alternative to sequential processors for BMIs. The reprogrammable hardware architecture of the FPGA provides a near optimal platform for performing parallel computations in real-time. The scalability and reconfigurability of the FPGA accommodates diverse sets of neural ensembles and a variety of decoding algorithms. Throughput is significantly increased by decomposing computations into independent parallel hardware modules on the FPGA. This increase in throughput is demonstrated through a parallel hardware implementation of the auxiliary particle filtering signal processing algorithm.
Symmetric Phase-Only Filtering in Particle-Image Velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2008-01-01
and second- image subregions are normalized by the square roots of their respective magnitudes. This scheme yields optimal performance because the amounts of normalization applied to the spatial-frequency contents of the input and filter scenes are just enough to enhance their high-spatial-frequency contents while reducing their spurious low-spatial-frequency content. As a result, in SPOF PIV processing, particle-displacement correlation peaks can readily be detected above spurious background peaks, without need for masking or background subtraction.
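The symmetric phase-only filter (SPOF) correlation described above can be sketched as follows: both spectra are normalized by the square roots of their magnitudes before correlating. This is an illustrative NumPy version, not the NASA implementation; the function name and the small regularization constant are assumptions.

```python
import numpy as np

def spof_correlate(a, b):
    """Correlate two image subregions with the symmetric phase-only filter:
    each spectrum is divided by the square root of its magnitude, which
    whitens low frequencies just enough to sharpen the displacement peak."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    A = A / (np.sqrt(np.abs(A)) + 1e-12)       # partial magnitude normalization
    B = B / (np.sqrt(np.abs(B)) + 1e-12)
    return np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))
```

The offset of the correlation peak from the center of the output plane gives the particle displacement between the two subregions.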
Metal finishing wastewater pressure filter optimization
Norford, S.W.; Diener, G.A.; Martin, H.L.
1992-01-01
The 300-M Area Liquid Effluent Treatment Facility (LETF) of the Savannah River Site (SRS) is an end-of-pipe industrial wastewater treatment facility that uses precipitation and filtration, the EPA Best Available Technology economically achievable for the Metal Finishing and Aluminum Forming industries. The LETF consists of three close-coupled treatment facilities: the Dilute Effluent Treatment Facility (DETF), which uses wastewater equalization, physical/chemical precipitation, flocculation, and filtration; the Chemical Treatment Facility (CTF), which slurries the filter cake generated by the DETF and pumps it to interim-status RCRA storage tanks; and the Interim Treatment/Storage Facility (IT/SF), which stores the waste from the CTF until it is stabilized/solidified for permanent disposal. Eighty-five percent of the stored waste is from past nickel plating and aluminum canning of depleted uranium targets for the SRS nuclear reactors. Waste minimization and filtration efficiency are key to cost-effective treatment of the supernate, because the waste filter cake generated is returned to the IT/SF. The DETF has been successfully optimized to achieve maximum efficiency and to minimize waste generation.
Optimal filters for detecting cosmic bubble collisions
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Johnson, M. C.; Peiris, H. V.
2012-05-01
A number of well-motivated extensions of the ΛCDM concordance cosmological model postulate the existence of a population of sources embedded in the cosmic microwave background. One such example is the signature of cosmic bubble collisions which arise in models of eternal inflation. The most unambiguous way to test these scenarios is to evaluate the full posterior probability distribution of the global parameters defining the theory; however, a direct evaluation is computationally impractical on large datasets, such as those obtained by the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. A method to approximate the full posterior has been developed recently, which requires as an input a set of candidate sources which are most likely to give the largest contribution to the likelihood. In this article, we present an improved algorithm for detecting candidate sources using optimal filters, and apply it to detect candidate bubble collision signatures in WMAP 7-year observations. We show both theoretically and through simulations that this algorithm provides an enhancement in sensitivity over previous methods by a factor of approximately two. Moreover, no other filter-based approach can provide a superior enhancement of these signatures. Applying our algorithm to WMAP 7-year observations, we detect eight new candidate bubble collision signatures for follow-up analysis.
Optimization of photon correlations by frequency filtering
NASA Astrophysics Data System (ADS)
González-Tudela, Alejandro; del Valle, Elena; Laussy, Fabrice P.
2015-04-01
Photon correlations are a cornerstone of quantum optics. Recent works [E. del Valle, New J. Phys. 15, 025019 (2013), 10.1088/1367-2630/15/2/025019; A. Gonzalez-Tudela et al., New J. Phys. 15, 033036 (2013), 10.1088/1367-2630/15/3/033036; C. Sanchez Muñoz et al., Phys. Rev. A 90, 052111 (2014), 10.1103/PhysRevA.90.052111] have shown that by keeping track of the frequency of the photons, rich landscapes of correlations are revealed. Stronger correlations are usually found where the system emission is weak. Here, we characterize both the strength and signal of such correlations, through the introduction of the "frequency-resolved Mandel parameter." We study a plethora of nonlinear quantum systems, showing how one can substantially optimize correlations by combining parameters such as pumping, filtering windows and time delay.
Application of the implicit particle filter to a model of nearshore circulation
NASA Astrophysics Data System (ADS)
Miller, R. N.; Ehret, L. L.
2014-04-01
The implicit particle filter is applied to a stochastically forced shallow water model of nearshore flow, and found to produce reliable state estimates with tens of particles. The state vector of this model consists of a height anomaly and two horizontal velocity components at each point on a 128 × 98 regular rectangular grid, making for a state dimension O(10⁴). The particle filter was applied to the model with two parameter choices representing two distinct dynamical regimes, and performed well in both. Demands on computing resources were manageable. Simulations with as many as a hundred particles ran overnight on a modestly configured workstation. In this case of observations defined by a linear function of the state vector, taken every time step of the numerical model, the implicit particle filter is equivalent to the optimal importance filter, i.e., at each step any given particle is drawn from the density of the system conditioned jointly upon observations and the state of that particle at the previous time. Even in this ideal case, the sample occasionally collapses to a single particle, and resampling is necessary. In those cases, the sample rapidly reinflates, and the analysis never loses track. In both dynamical regimes, the ensembles of particles deviated significantly from normality.
Genetic particle filter application to land surface temperature downscaling
NASA Astrophysics Data System (ADS)
Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz
2014-03-01
Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies to use the coarse spatial/high temporal resolution LST remote-sensing products for a better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method was developed to downscale LST based on particle filtering. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR, through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated in the framework of the study area landscape (Crau-Camargue, France) and climate for the year 2006. The GPF performances were evaluated against observation errors and temporal sampling. Results show that GPF outperforms prior model estimations. Finally, the GPF method was applied on Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors less than 2.4 K, which is a very promising result for downscaling LST satellite products.
COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS
The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...
Probabilistic-based approach to optimal filtering
Hannachi
2000-04-01
The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and demands that the signal corresponds to minimizing the noise probability distribution for any given value, i.e., on isosurfaces, of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical and variants of the Lorenz system.
NASA Astrophysics Data System (ADS)
Sambaer, Wannes; Zatloukal, Martin; Kimmer, Dusan
2013-04-01
A realistic SEM image based 3D filter model, considering the transition/free molecular flow regime, Brownian diffusion, aerodynamic slip, and particle-fiber and particle-particle interactions, together with a novel Euclidean distance map based methodology for the pressure drop calculation, has been utilized for a polyurethane nanofiber based filter prepared via an electrospinning process, in order to understand more deeply the effect of the particle-fiber friction coefficient on filter clogging and basic filter characteristics. The theoretical analysis reveals that an increase in the particle-fiber friction coefficient causes, first, weaker particle penetration into the filter, the creation of dense top layers, and a higher pressure drop (surface filtration), in comparison with a filter with a lower particle-fiber friction coefficient, in which deeper particle penetration takes place (depth filtration); second, higher filtration efficiency; third, a higher quality factor; and finally, higher sensitivity of the quality factor to the collected particle mass. Moreover, it was revealed that even if the particle-fiber friction coefficient differs, the cake morphology is very similar.
Analysis of video-based microscopic particle trajectories using Kalman filtering.
Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P; Tseng, Yiider
2010-06-16
The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes.
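A minimal constant-velocity Kalman filter for a 1D position track can be sketched as follows. This is illustrative Python, not the authors' code; the process and measurement noise values `q` and `r` are placeholders that would be determined experimentally, as the paper describes.

```python
import numpy as np

def kalman_smooth_track(positions, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over a 1D position track.
    q: process noise intensity, r: measurement noise variance (both tunable)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                  # state transition
    H = np.array([[1.0, 0.0]])                             # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                    # process noise
    R = np.array([[r]])                                    # measurement noise
    x = np.array([positions[0], 0.0])                      # [position, velocity]
    P = np.eye(2)
    out = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

With small `q`, the filter trusts the motion model and heavily suppresses positioning noise; with large `q`, it follows the raw measurements more closely, retaining intrinsic fluctuations.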
Optimized multichannel decomposition for texture segmentation using Gabor filter bank
NASA Astrophysics Data System (ADS)
Nezamoddini-Kachouie, Nezamoddin; Alirezaie, Javad
2004-05-01
Texture segmentation and analysis is an important aspect of pattern recognition and digital image processing. Previous approaches to texture analysis and segmentation perform multi-channel filtering by applying a set of filters to the image. In this paper we describe a texture segmentation algorithm based on multi-channel filtering that is optimized using the diagonal high-frequency residual. Gabor band-pass filters with different radial spatial frequencies and different orientations have optimum joint resolution in the spatial and frequency domains. The image is decomposed by a set of Gabor filters into a number of filtered images; each one contains the intensity variation in a frequency sub-band at a given orientation. The features extracted by Gabor filters have been applied to image segmentation and analysis. There are some important considerations about filter parameters and filter bank coverage in the frequency domain. This filter bank does not completely cover the corners of the frequency domain along the diagonals. In our method we optimize the spatial implementation of the Gabor filter bank by taking the diagonal high-frequency residual into account. Segmentation is accomplished by a feedforward backpropagation multi-layer perceptron (MLP) that is trained on the optimized extracted features. After the MLP is trained, the input image is segmented and each pixel is assigned to the proper class.
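The construction of such a multi-frequency, multi-orientation Gabor filter bank can be sketched as follows. This is an illustrative NumPy version; the specific frequencies, orientation count, and envelope width are assumptions for the example, not the paper's optimized parameters.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=21):
    """Even-symmetric Gabor kernel: a Gaussian envelope times a cosine
    carrier at radial frequency `freq` (cycles/pixel), orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * \
        np.cos(2 * np.pi * freq * x_theta)

def gabor_bank(freqs=(0.1, 0.2, 0.4), n_orient=4):
    """Bank over 3 radial frequencies x 4 orientations (12 filters)."""
    return [gabor_kernel(f, k * np.pi / n_orient)
            for f in freqs for k in range(n_orient)]
```

Convolving the image with each kernel yields one filtered image per sub-band/orientation pair; those responses form the feature vectors fed to the classifier.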
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploy two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition, and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment; this calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to a specific filter type. These narrow-scoped solutions disregard the versatility required by advanced image processing applications and in turn offset their effectiveness when implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry based subtypes and also its special asymmetric filter case. The two-fold optimized framework thus reduces filter computational cost by up to 75% compared to the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric phase filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world, and the optimal model to accurately determine the functional relationship between the indicators and alpha is also unclear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iterative step is merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
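The core frequency-domain operation the Goldstein filter applies can be sketched as follows. This is a plain fixed-alpha version, not the adaptive alpha estimation proposed above; the patch size, box smoothing kernel, and synthetic constant-phase test data are illustrative assumptions:

```python
import numpy as np

def goldstein_filter(patch, alpha=1.0, smooth=3):
    """Weight the patch spectrum by its smoothed magnitude raised to alpha."""
    spec = np.fft.fft2(patch)
    mag = np.abs(spec)
    kernel = np.ones((smooth, smooth)) / smooth**2
    # crude circular box-smoothing of the spectral magnitude
    sm = np.real(np.fft.ifft2(np.fft.fft2(mag) * np.fft.fft2(kernel, mag.shape)))
    sm = np.maximum(sm, 1e-12)
    return np.fft.ifft2(spec * (sm / sm.max()) ** alpha)

# a constant-phase patch with strong phase noise: filtering should concentrate
# energy at the dominant spectral component and reduce the phase scatter
rng = np.random.default_rng(0)
noisy = np.exp(1j * (0.5 + 0.8 * rng.standard_normal((32, 32))))
filt = goldstein_filter(noisy)
print(np.angle(filt).std() < np.angle(noisy).std())
```

A larger alpha weights the dominant spectral components more aggressively, which is exactly the over-filtering risk in low-noise areas that the adaptive variants above try to control.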
Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters
Higby, D.P.
1984-11-01
Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.
Optimal design of AC filter circuits in HVDC converter stations
Saied, M.M.; Khader, S.A.
1995-12-31
This paper investigates the reactive power as well as the harmonic conditions on both the valve and AC-network sides of an HVDC converter station. The effect of the AC filter circuits is accurately modeled, and the program is then augmented with an optimization routine that can identify the optimal filter configuration, yielding the minimum current distortion factor at the AC network terminals for a prespecified fundamental reactive power to be provided by the filter. Several parameter studies were also conducted to illustrate the effect of accidental or intentional deletion of one of the filter branches.
Optimal filter bandwidth for pulse oximetry.
Stuban, Norbert; Niwayama, Masatsugu
2012-10-01
Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
Sequential bearings-only-tracking initiation with particle filtering method.
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also employed for performance evaluation.
A local particle filter for high-dimensional geophysical systems
NASA Astrophysics Data System (ADS)
Penny, Stephen G.; Miyoshi, Takemasa
2016-11-01
A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard sampling importance resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each grid point. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the local ensemble transform Kalman filter (LETKF) using the 40-variable Lorenz-96 (L96) model. The results show that (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.
Nonlinear Statistical Signal Processing: A Particle Filtering Approach
Candy, J
2007-09-19
An introduction to particle filtering is discussed, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g., Kalman filters) are no longer a restriction on performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.
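The sequential importance sampling/resampling loop described above can be sketched as a bootstrap particle filter. The scalar linear-Gaussian model here is an illustrative assumption chosen so the result is easy to sanity-check; the same propagate/weight/resample loop applies to any nonlinear, non-Gaussian model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_particle_filter(ys, n=500, q=1.0, r=1.0):
    """Bootstrap (SIR) filter for x_k = 0.9 x_{k-1} + v_k,  y_k = x_k + w_k."""
    x = rng.standard_normal(n)                            # initial particle cloud
    est = []
    for y in ys:
        x = 0.9 * x + np.sqrt(q) * rng.standard_normal(n) # propagate particles
        logw = -0.5 * (y - x) ** 2 / r                    # measurement likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * x))                         # weighted posterior mean
        x = x[rng.choice(n, n, p=w)]                      # multinomial resampling
    return np.array(est)

# simulate a trajectory, then check the filter beats the raw measurement noise
xs, ys, xk = [], [], 0.0
for _ in range(100):
    xk = 0.9 * xk + rng.standard_normal()
    xs.append(xk)
    ys.append(xk + rng.standard_normal())
est = sir_particle_filter(np.array(ys))
rmse = float(np.sqrt(np.mean((est - np.array(xs)) ** 2)))
print(round(rmse, 2))
```

Because the model is linear-Gaussian, the filtered RMSE should land near the Kalman steady-state value (about 0.77 here), well below the unit measurement-noise level.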
Localization using omnivision-based manifold particle filters
NASA Astrophysics Data System (ADS)
Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond
2015-01-01
Developing precise and low-cost spatial localization algorithms is an essential component for autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensions and excessive bandwidth. This paper presents a novel approach to indoor localization using a low cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e., the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey, and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
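A minimal sketch of the information-form fusion that underlies such a recursive information filter, assuming independent estimates with known covariances (the 2-D state and the numbers are illustrative, not the geomagnetic models above):

```python
import numpy as np

def fuse(estimates, covariances):
    """Combine independent estimates x_i, P_i in information form:
    x = (sum P_i^-1)^-1 * (sum P_i^-1 x_i)."""
    L = sum(np.linalg.inv(P) for P in covariances)                 # total information
    b = sum(np.linalg.inv(P) @ x for x, P in zip(estimates, covariances))
    return np.linalg.solve(L, b), np.linalg.inv(L)

x1, P1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
x2, P2 = np.array([0.0, 1.0]), 2.0 * np.eye(2)
x, P = fuse([x1, x2], [P1, P2])
print(x)        # [0.5 0.5]: midpoint of two equally weighted estimates
print(P[0, 0])  # fusing two equal covariances halves the variance, to 1.0
```

Working with information matrices makes combining many independent models a simple sum, which is why the study combined its five field models this way.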
Object tracking by co-trained classifiers and particle filters
NASA Astrophysics Data System (ADS)
Tang, Liang; Li, Shanqing; Liu, Keyan; Wang, Lei
2010-01-01
This paper presents an online object tracking method in which co-training and particle filter algorithms cooperate and complement each other for robust and effective tracking. Within the particle filter framework, a semi-supervised co-training algorithm is adopted to construct, update online, and mutually boost two complementary object classifiers, which consequently improves the discriminative ability of the particles and their adaptability to appearance variations caused by illumination changes, pose variation, camera shake, and occlusion. Meanwhile, to make the sampling procedure more efficient, knowledge from coarse confidence maps and spatial-temporal constraints is introduced via importance sampling. This not only improves the accuracy and efficiency of the sampling procedure, but also provides more reliable training samples for co-training. Experimental results verify the effectiveness and robustness of our method.
Distributed Particle Filter for Target Tracking: With Reduced Sensor Communications
Ghirmai, Tadesse
2016-01-01
For efficient and accurate estimation of the location of objects, a network of sensors can be used to detect and track targets in a distributed manner. In nonlinear and/or non-Gaussian dynamic models, distributed particle filtering methods are commonly applied to develop target tracking algorithms. An important consideration in developing a distributed particle filtering algorithm in wireless sensor networks is reducing the size of data exchanged among the sensors because of power and bandwidth constraints. In this paper, we propose a distributed particle filtering algorithm with the objective of reducing the overhead data that is communicated among the sensors. In our algorithm, the sensors exchange information to collaboratively compute the global likelihood function that encompasses the contribution of the measurements towards building the global posterior density of the unknown location parameters. Each sensor, using its own measurement, computes its local likelihood function and approximates it using a Gaussian function. The sensors then propagate only the mean and the covariance of their approximated likelihood functions to other sensors, reducing the communication overhead. The global likelihood function is computed collaboratively from the parameters of the local likelihood functions using an average consensus filter or a forward-backward propagation information exchange strategy. PMID:27618057
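The likelihood fusion step described above can be sketched as follows. The fully connected three-sensor network, consensus step size, and scalar Gaussian approximations are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np

def consensus(values, neighbors, steps=60, eps=0.3):
    """Average consensus: repeated local exchanges drive every node to the mean."""
    v = np.array(values, dtype=float)
    for _ in range(steps):
        v = v + eps * np.array([sum(v[j] - v[i] for j in neighbors[i])
                                for i in range(len(v))])
    return v

# three fully connected sensors, each holding a Gaussian-approximated local
# likelihood with mean m_i and precision p_i (inverse variance)
means = [1.0, 2.0, 3.0]
precs = [1.0, 0.5, 2.0]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
avg_prec = consensus(precs, nbrs)[0]
avg_pm = consensus([p * m for p, m in zip(precs, means)], nbrs)[0]
n = 3
global_prec = n * avg_prec          # sum of precisions = n * network average
global_mean = avg_pm / avg_prec     # (sum p_i m_i) / (sum p_i)
print(round(global_mean, 3), round(global_prec, 3))
```

The product of the local Gaussian likelihoods is itself Gaussian, so exchanging only each sensor's mean and precision (rather than raw measurements or particles) is enough to recover the global likelihood, which is the communication saving the paper targets.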
Utilizing Time Redundancy for Particle Filter-Based Transfer Alignment
NASA Astrophysics Data System (ADS)
Chattaraj, Suvendu; Mukherjee, Abhik
2016-07-01
Signal detection in the presence of high noise is a challenge across the natural sciences. From understanding signals emanating from deep space probes to signals in protein interactions for systems biology, domain-specific innovations are needed. The present work is in the domain of transfer alignment (TA), which deals with estimating the misalignment of deliverable daughter munitions with respect to the delivering mother platform. In this domain, the design of a noise filtering scheme has to consider time-varying and nonlinear system dynamics. The accuracy of the conventional particle filter formulation suffers due to deviations from the modeled system dynamics. An evolutionary particle filter can overcome this problem by evolving multiple system models through a few support points per particle. However, this variant has even higher time complexity for real-time execution. As a result, the measurement update gets deferred and estimation accuracy is compromised. By running these filter algorithms on multiple processors, the execution time can be reduced to allow frequent measurement updates. Such a scheme ensures better system identification, so that performance improves in the case of simultaneous ejection of multiple daughters, and also results in better convergence of the TA algorithms for a single daughter.
Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang
2015-01-01
We propose an improved multiswarm particle swarm optimization algorithm with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into standard particle swarm optimization (PSO) in order to balance exploration and exploitation, as well as to enhance the capacity for global search in solving nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, explained using the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better on the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200
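For reference, a minimal global-best PSO on the Sphere function mentioned above. This is the standard single-swarm PSO baseline, not BMPSO; the inertia and acceleration coefficients are conventional illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO minimizing f over [-5, 5]^dim."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                   # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f                          # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

def sphere(z):
    return float(np.sum(z ** 2))

best_x, best_f = pso(sphere)
print(best_f)
```

On the 2-D Sphere function this baseline converges to essentially zero, which is why multiswarm variants such as BMPSO are instead evaluated on harder multimodal functions and engineering problems.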
NASA Astrophysics Data System (ADS)
Putti, M.; Camporese, M.; Pasetto, D.
2010-12-01
Data assimilation (DA) has recently received growing interest from the hydrological modeling community due to its capability to merge observations into model predictions. Among the many DA methods available, the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) are suitable alternatives for applications to detailed physically based hydrological models. For each assimilation period, both methods use a Monte Carlo approach to approximate the state probability distribution (in terms of a mean and covariance matrix) by a finite number of independent model trajectories, also called particles or realizations. The two approaches differ in the way the filtering distribution is evaluated. EnKF implements the classical Kalman filter, which is optimal only for linear dynamics and Gaussian error statistics. Particle filters, instead, directly use the recursive formula of the sequential Bayesian framework and approximate the posterior probability distributions by means of appropriate weights associated with each realization. We use the Sequential Importance Resampling (SIR) technique, which retains only the most probable particles (in practice, the trajectories closest in a statistical sense to the observations) and duplicates them when needed. In contrast to EnKF, particle filters make no assumptions on the form of the prior distribution of the model state, and convergence to the true state is ensured for a large enough ensemble size. In this study, EnKF and PF have been implemented in a physically based catchment simulator that couples a three-dimensional finite element Richards equation solver with a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We report on the retrieval performance of the two schemes using a three-dimensional tilted v-catchment synthetic test case in which multi-source observations are assimilated (pressure head, soil moisture, and streamflow data). The comparison between the results of the two approaches
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters: alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average number of false positives per image.
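The adaptive-step search idea can be sketched as follows. The quadratic surrogate metric with a known optimum is a stand-in assumption for the real feedback (correlation peak height and peak-to-sidelobe ratio, which require imagery), and the target values are hypothetical:

```python
import numpy as np

# Surrogate objective: a smooth stand-in for the real composite performance
# measure, with a known optimum so the search can be checked.
TARGET = np.array([0.3, 0.5, 0.2])       # hypothetical best (alpha, beta, gamma)

def metric(p):
    return -float(np.sum((p - TARGET) ** 2))

def adaptive_gradient_ascent(f, p0, step=0.5, iters=100, h=1e-5):
    """Grow the step after an accepted move, shrink it after a rejected one."""
    p = np.array(p0, dtype=float)
    best = f(p)
    for _ in range(iters):
        grad = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h)
                         for e in np.eye(len(p))])   # finite-difference gradient
        cand = p + step * grad
        if f(cand) > best:
            p, best, step = cand, f(cand), 1.2 * step
        else:
            step *= 0.5
    return p

p = adaptive_gradient_ascent(metric, [1.0, 1.0, 1.0])
print(np.round(p, 3))
```

Adapting the step size this way avoids hand-tuning a learning rate for each filter, which mirrors the motivation for automating the previously manual parameter search.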
A Parameterized Design Framework for Hardware Implementation of Particle Filters
2008-03-01
explore different design options for implementing two different particle filtering applications on field-programmable gate arrays (FPGAs), and we present... associated results on trade-offs between area (FPGA resource requirements) and execution speed. Index Terms: Field-programmable gate arrays, Parallel... programmable gate arrays (FPGAs) is proposed to enable comprehensive design space exploration of the whole system with attention to the interaction
Chi-squared smoothed adaptive particle-filtering based prognosis
NASA Astrophysics Data System (ADS)
Ley, Christopher P.; Orchard, Marcos E.
2017-01-01
This paper presents a novel form of selecting the likelihood function of the standard sequential importance sampling/re-sampling particle filter (SIR-PF) with a combination of sliding window smoothing and chi-square statistic weighting, so as to: (a) increase the rate of convergence of a flexible state model with artificial evolution for online parameter learning (b) improve the performance of a particle-filter based prognosis algorithm. This is applied and tested with real data from oil total base number (TBN) measurements from three haul trucks. The oil data has high measurement uncertainty and an unknown phenomenological state model. Performance of the proposed algorithm is benchmarked against the standard form of SIR-PF estimation which utilises the Normal (Gaussian) likelihood function. Both implementations utilise the same particle filter based prognosis algorithm so as to provide a common comparison. A sensitivity analysis is also performed to further explore the effects of the combination of sliding window smoothing and chi-square statistic weighting to the SIR-PF.
Design of optimal correlation filters for hybrid vision systems
NASA Technical Reports Server (NTRS)
Rajan, Periasamy K.
1990-01-01
Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not well suited for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out the object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
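A minimal sketch of the frequency-plane correlation that such correlator filters implement; the scene size, random reference patch, and embedding location are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((8, 8))                 # reference (target) template
scene = np.zeros((64, 64))
scene[20:28, 30:38] = ref                # embed the target at row 20, col 30

# Frequency-plane correlation: corr = IFFT( FFT(scene) * conj(FFT(ref)) )
R = np.fft.fft2(ref, scene.shape)        # zero-pad the reference to scene size
corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(R)))
peak = np.unravel_index(corr.argmax(), corr.shape)
print(peak)  # the correlation peak recovers the target location (20, 30)
```

The two Fourier transforms and the product are exactly the operations an optical correlator performs at the speed of light, which is the motivation for the hybrid optical/digital split described above.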
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results of assimilating ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km²) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
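A minimal sketch of parameterizing a recursive filter directly in the z-plane, as described above: restricting the pole radius to r < 1 keeps the search space stable by construction. The second-order all-pole form and the specific pole placement are illustrative assumptions, not the paper's filter structure:

```python
import numpy as np

def biquad_from_poles(r, theta):
    """2nd-order all-pole low-pass with poles at r*exp(+/- i*theta), unity DC gain."""
    a1, a2 = -2 * r * np.cos(theta), r ** 2     # denominator coefficients
    b0 = 1 + a1 + a2                            # normalize so H(z=1) = 1
    return b0, (a1, a2)

def freq_response(b0, a, w):
    """Evaluate H(z) = b0 / (1 + a1 z^-1 + a2 z^-2) at z = exp(i*w)."""
    z = np.exp(1j * w)
    return b0 / (1 + a[0] / z + a[1] / z ** 2)

b0, a = biquad_from_poles(r=0.9, theta=0.2)
H_dc = abs(freq_response(b0, a, 0.0))
H_hi = abs(freq_response(b0, a, np.pi / 2))     # well above the pole frequency
print(H_dc, H_hi)
```

An optimizer can then search over (r, theta) to meet phase constraints at the bending-mode frequency while maximizing attenuation in higher bands, with every candidate guaranteed stable and minimum phase.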
Models of filter-based particle light absorption measurements
NASA Astrophysics Data System (ADS)
Hamasha, Khadeejeh M.
Light absorption by aerosol is very important in the visible, near-UV, and near-IR regions of the electromagnetic spectrum. Aerosol particles in the atmosphere have a great influence on the flux of solar energy and adversely affect health when inhaled into the lungs. Aerosol absorption measurements are usually performed by filter-based methods that derive absorption from the change in light transmission through a filter on which particles have been deposited. These methods suffer from interference between light-absorbing and light-scattering aerosol components. The Aethalometer is the most commonly used filter-based instrument for aerosol light absorption measurement. This dissertation describes new understanding of aerosol light absorption obtained by the filter method. The theory uses a multiple scattering model for the combination of filter and particle optics. The theory is evaluated using Aethalometer data from laboratory and ambient measurements in comparison with photoacoustic measurements of aerosol light absorption. Two models were developed to calculate aerosol light absorption coefficients from the Aethalometer data and were compared against in-situ aerosol light absorption coefficients: the first is an approximate model and the second is a "full" model. In the approximate model, two extreme cases of aerosol optics were used to develop a model-based calibration scheme for the 7-wavelength Aethalometer: very strongly scattering aerosols (an ammonium sulfate sample) and very strongly absorbing aerosols (a kerosene soot sample). In the strong multiple-scattering limit, the light attenuation is shown to vary with the square root of the total absorption optical depth, rather than linearly with optical depth as is commonly assumed with Beer's law. Two-stream radiative transfer theory was used to develop the full model for calculating the aerosol light absorption coefficients from the Aethalometer data. This comprehensive model
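The square-root attenuation behavior can be made concrete with a small numeric sketch (illustrative only, not the dissertation's full 2-stream model; the unit prefactor in the square-root law is an assumption):

```python
import numpy as np

# Absorption optical depth of the particle deposit on the filter
tau = np.linspace(0.0, 2.0, 201)

# Beer-Lambert law: transmission falls linearly in tau (in the exponent)
beer = np.exp(-tau)

# Strong multiple-scattering limit: transmission falls with sqrt(tau)
multi = np.exp(-np.sqrt(tau))

# For small tau the square-root law attenuates faster, so inverting the
# measured attenuation with Beer's law overestimates absorption there;
# the two curves cross at tau = 1 and the ordering reverses beyond it.
```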
Single-channel noise reduction using optimal rectangular filtering matrices.
Long, Tao; Chen, Jingdong; Benesty, Jacob; Zhang, Zhenxi
2013-02-01
This paper studies the problem of single-channel noise reduction in the time domain and presents a block-based approach where a vector of the desired speech signal is recovered by filtering a frame of the noisy signal with a rectangular filtering matrix. With this formulation, the noise reduction problem becomes one of estimating an optimal filtering matrix. To achieve such estimation, a method is introduced to decompose a frame of the clean speech signal into two orthogonal components: one correlated and the other uncorrelated with the current desired speech vector to be estimated. Different optimization cost functions are then formulated from which non-causal optimal filtering matrices are derived. The relationships among these optimal filtering matrices are discussed. In comparison with the classical sample-based technique that uses only forward prediction, the block-based method presented in this paper exploits both the forward and backward prediction as well as the temporal interpolation and, therefore, can improve the noise reduction performance by fully taking advantage of the speech property of self-correlation. There is also a side advantage of this block-based method as compared to the sample-based technique, i.e., it is computationally more efficient and, as a result, more suitable for practical implementation.
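The rectangular-matrix formulation can be sketched with a standard Wiener solution (a textbook estimator, not the paper's exact non-causal optimal matrices; the synthetic "speech" process and dimensions are invented for illustration). A length-L clean vector is estimated from a length-M noisy frame (M > L) by an L-by-M matrix H = R_xy R_yy^-1:

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, N = 8, 4, 20000
# Synthetic correlated "speech" (AR-like, via a short smoothing kernel) + white noise
x = np.convolve(rng.normal(size=N + M), [1.0, 0.8, 0.4])[:N + M]
v = 0.5 * rng.normal(size=N + M)
y = x + v

# Stack N overlapping frames: Y is M x N (noisy), X is L x N (clean targets)
Y = np.stack([y[i:i + M] for i in range(N)], axis=1)
X = np.stack([x[i:i + L] for i in range(N)], axis=1)

Rxy = X @ Y.T / N                       # cross-correlation, L x M
Ryy = Y @ Y.T / N                       # noisy autocorrelation, M x M
H = Rxy @ np.linalg.inv(Ryy)            # rectangular (L x M) filtering matrix

X_hat = H @ Y                           # block-based speech estimate
mse_filtered = np.mean((X_hat - X) ** 2)
mse_noisy = np.mean((Y[:L] - X) ** 2)   # baseline: take the noisy samples as-is
```

Because each row of H draws on the whole M-sample frame, the estimate of every sample in the block exploits both past and future noisy samples, which is the forward/backward-prediction advantage the abstract describes.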
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error.
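The role of a single filter cavity can be sketched with the standard Airy transmission of an ideal lossless Fabry-Perot etalon: when the cavity FSR is m times the source-comb line spacing, every m-th line is resonant and the lines in between are suppressed. The numbers below (1 GHz comb, m = 16, mirror reflectivity 0.98) are illustrative assumptions, not the paper's design values:

```python
import numpy as np

def fp_transmission(f, fsr, R=0.98):
    """Airy transmission of an ideal lossless Fabry-Perot cavity.
    T = 1 / (1 + F * sin^2(pi * f / FSR)), with coefficient of finesse
    F = 4R / (1 - R)^2."""
    F = 4.0 * R / (1.0 - R) ** 2
    return 1.0 / (1.0 + F * np.sin(np.pi * f / fsr) ** 2)

f_rep = 1.0e9                          # source-comb line spacing (illustrative)
m = 16                                 # desired repetition-rate multiplication
fsr = m * f_rep                        # cavity free spectral range
lines = np.arange(64) * f_rep          # comb line frequencies
T = fp_transmission(lines, fsr)

passed = T[::m]                        # every m-th line sits on a resonance
suppressed = np.delete(T, np.arange(0, 64, m))   # all intermediate lines
```

In practice the finite suppression of the intermediate lines is exactly what the abstract's cavity-ordering analysis is about: nonlinear phase accumulated in the amplifier can re-amplify these residual lines, so where the filtering happens relative to the CPA stage matters.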
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
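For reference, the textbook global-best PSO that such applications build on can be sketched as follows (standard inertia/acceleration parameters, not the values tuned in the paper; the sphere function stands in for the actual Runge-Kutta objective):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Sanity check on the 5-D sphere function (global minimum 0 at the origin)
best_x, best_val = pso(lambda p: np.sum(p ** 2), dim=5)
```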
Selectively-informed particle swarm optimization
Gao, Yang; Du, Wenbo; Yan, Gang
2015-01-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single, best-performing neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315
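The degree-dependent learning rule can be sketched as a velocity update that switches on a particle's connectivity (the degree threshold and coefficients here are illustrative assumptions, not the paper's benchmark setup):

```python
import numpy as np

def sipso_velocity(i, x, v, pbest, pbest_val, adj,
                   w=0.7, c=1.5, hub_deg=4, rng=None):
    """Velocity update for particle i under a selectively-informed rule:
    hub particles (degree >= hub_deg) average the personal bests of all
    neighbors; non-hub particles follow only their best-performing neighbor."""
    if rng is None:
        rng = np.random.default_rng()
    nbrs = np.flatnonzero(adj[i])                  # indices of i's neighbors
    if len(nbrs) >= hub_deg:                       # hub: fully informed
        social = np.mean(pbest[nbrs], axis=0) - x[i]
    else:                                          # non-hub: best neighbor only
        best = nbrs[pbest_val[nbrs].argmin()]
        social = pbest[best] - x[i]
    r1, r2 = rng.random(x.shape[1]), rng.random(x.shape[1])
    return w * v[i] + c * r1 * (pbest[i] - x[i]) + c * r2 * social
```

On a heterogeneous (e.g. scale-free) interaction network, the few high-degree hubs thus aggregate swarm-wide information while the many low-degree particles each track a single good neighbor, which is the division of labor the abstract attributes to SIPSO's performance.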
Assessment of optimally filtered recent geodetic mean dynamic topographies
NASA Astrophysics Data System (ADS)
Siegismund, F.
2013-01-01