Particle swarm optimization-based approach for optical finite impulse response filter design
Wu, Shin-Tson
… a method for the design of an optical finite impulse response (FIR) filter employing a particle swarm optimization technique. With the method proposed, the design of an optical FIR filter, which is able to provide …
Hybridizing Particle Filters and Population-based Metaheuristics for Dynamic Optimization Problems
Pantrigo Fernández, Juan José
… reconstruction procedure [15]. On the other hand, many dynamic problems require the estimation of the system state. Many real-world optimization problems are dynamic. These problems require powerful methods …
Visual Tracking & Particle Filters
LeGland, François
… production (compositing, augmented reality, editing, re-purposing, stereo-3D authoring, motion capture for animation). General case: sequential Monte Carlo approximation (particle filter). Pros: transports the full distribution …
A SIMULATION-BASED OPTIMIZATION APPROACH TO POLYMER EXTRUSION FILTER DESIGN
Jenkins, Lea
K.R. Fowler, S.M. La… Methods for finding optimal parameters for the filter such that its lifetime is maximized, while … a model that describes the deposition of debris particles in the filter. Optimization algorithms are used …
Variational Particle Filter for Imperfect Models
NASA Astrophysics Data System (ADS)
Baehr, C.
2012-12-01
Whereas classical data processing techniques work with perfect models, the geophysical sciences have to deal with imperfect models with spatially structured errors. For the perfect-model case, in terms of mean-field Markovian processes, the optimal filter is known: the Kalman estimator is the answer to the linear Gaussian problem, and in the general case particle approximations are the empirical solutions to the optimal estimator. We will present another way to decompose the Bayes rule, using a one-step-ahead observation. This method is well adapted to strongly nonlinear or chaotic systems. Then, in order to deal with imperfect models, we suggest in this presentation to learn the (large-scale) model errors using a variational correction before the resampling step of the nonlinear filtering. This procedure replaces the a priori Markovian transition by a kernel conditioned on the observations. This supplementary step may be read as the use of a variational particle approximation. For the numerical applications, we have chosen to show the impact of our method, first on a simple marked Poisson process with Gaussian observation noises (the time-exponential jumps are considered as model errors) and then on a 2D shallow-water experiment in a closed basin, with some falling droplets as model errors. Figure captions: (1) Marked Poisson process with Gaussian observation noise filtered by four methods: classical Kalman filter, genetic particle filter, trajectorial particle filter, and Kalman-particle filter, all using only 10 particles. (2) 2D shallow-water simulation with droplet errors: results of a classical 3D-Var and of our VarPF (10 particles).
Optimization of integrated polarization filters.
Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J
2014-10-01
This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics. PMID:25360980
Series expansions of Brownian motion and the unscented particle filter
Edinburgh, University of
October 15, 2013. Abstract: The discrete-time filtering problem for nonlinear diffusion processes is computationally intractable in general. For this reason, methods such as the bootstrap filter are particularly effective at approximating the optimal …
Towards robust particle filters for high-dimensional systems
NASA Astrophysics Data System (ADS)
van Leeuwen, Peter Jan
2015-04-01
In recent years particle filters have matured and several variants are now available that are not degenerate for high-dimensional systems. Often they are based on ad hoc combinations with ensemble Kalman filters. Unfortunately, it is unclear what approximations are made when these hybrids are used. The proper way to derive particle filters for high-dimensional systems is to explore the freedom in the proposal density. It is well known that using an ensemble Kalman filter as the proposal density (the so-called Weighted Ensemble Kalman Filter) does not work for high-dimensional systems. However, much better results are obtained when weak-constraint 4D-Var is used as the proposal, leading to the implicit particle filter. Still, this filter is degenerate when the number of independent observations is large. The Equivalent-Weights Particle Filter works well in systems of arbitrary dimension, but it contains a few tuning parameters that have to be chosen well to avoid biases. In this paper we discuss ways to derive more robust particle filters for high-dimensional systems. Using ideas from large-deviation theory and optimal transportation, particle filters will be generated that are robust and work well in these systems. It will be shown that all successful filters can be derived from one general framework. The performance of the filters will also be tested on simple but high-dimensional systems and, if time permits, on a high-dimensional, highly nonlinear barotropic vorticity equation model.
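The weighting-and-resampling cycle that all of these proposal-density variants build on can be sketched with a minimal bootstrap filter step (an illustrative toy, not the paper's high-dimensional method; the scalar model and all parameters are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf_step(particles, weights, transition, likelihood, y):
    # Propagate with the transition model (the bootstrap "proposal"),
    # reweight by the likelihood of the new observation, then resample.
    particles = transition(particles)
    weights = weights * likelihood(y, particles)
    weights = weights / weights.sum()
    n = len(weights)
    # Systematic resampling to fight weight degeneracy.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Toy scalar model: x_k = 0.9 x_{k-1} + w_k,   y_k = x_k + v_k
transition = lambda x: 0.9 * x + rng.normal(0.0, 0.5, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

particles = rng.normal(0.0, 1.0, 1000)
weights = np.full(1000, 1.0 / 1000)
particles, weights = bootstrap_pf_step(particles, weights, transition, likelihood, y=1.2)
```

The "freedom in the proposal density" the abstract refers to amounts to replacing the transition draw with a smarter proposal and correcting the weights accordingly.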
Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization
Pei, Fujun; Wu, Mei; Zhang, Simin
2014-01-01
The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with the centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter: adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed by mixing the innovation method and the effective-particle-number method, combining the advantages of the two. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improved the ability of the DPF-SLAM system to isolate faults and enabled the system to have better tolerance and robustness. PMID:24883362
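The effective-particle-number criterion mentioned above has a standard closed form, N_eff = 1 / Σ w_i²; a small sketch (illustrative only, not the paper's fusion method):

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) for normalised weights: equals N when all
    weights are uniform and 1 when one particle carries all the weight."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# A common rule: resample only when N_eff drops below, say, N/2.
uniform = np.full(100, 1.0 / 100)   # every particle equally weighted
degenerate = np.zeros(100)
degenerate[0] = 1.0                 # one particle carries all the weight
```

Monitoring N_eff this way is what lets a filter skip unnecessary resampling steps, which is exactly where impoverishment originates.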
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization lies primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
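A minimal particle swarm optimizer illustrating the velocity/position update the paper builds on (this is the textbook algorithm, not the paper's improved variant; all coefficients and the toy objective are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Velocity update: inertia + pull toward each particle's personal best
    # + pull toward the swarm's global best.
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy objective: the sphere function, minimum 0 at the origin.
best_x, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

Because the update uses only function values, not gradients, the same loop handles the discrete and discontinuous objectives the abstract highlights.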
BACTERIA-FILTERS: PERSISTENT PARTICLE FILTERS FOR BACKGROUND SUBTRACTION
Peleg, Shmuel
Yair Movshovitz … the switch of bacteria between two states: a normal growing cell and a dormant but persistent cell; after the stress is over, bacterial growth continues. Similar to bacteria, particles will switch between …
System and Apparatus for Filtering Particles
NASA Technical Reports Server (NTRS)
Agui, Juan H. (Inventor); Vijayakumar, Rajagopal (Inventor)
2015-01-01
A modular pre-filtration apparatus may be beneficial for extending the life of a filter. The apparatus may include an impactor that collects a first set of particles in the air and a scroll filter that collects a second set of particles in the air. A filter may follow the pre-filtration apparatus, thus increasing the life of that filter.
Optimal compositions of soft morphological filters
NASA Astrophysics Data System (ADS)
Koivisto, Pertti T.; Huttunen, Heikki; Kuosmanen, Pauli
1995-03-01
Soft morphological filters form a large class of nonlinear filters with many desirable properties. However, few design methods exist for these filters, and in the existing methods the selection of the filter composition tends to be ad hoc and application specific. This paper demonstrates how optimization schemes, namely simulated annealing and genetic algorithms, can be employed in the search for soft morphological filter sequences realizing optimal performance in a given signal processing task. The paper also describes the modifications in the optimization schemes required to obtain sufficient convergence.
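As an illustration of how simulated annealing can search a discrete filter-composition space (a generic sketch with an invented toy cost, not the paper's actual objective or filter encoding):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_annealing(cost, neighbor, x0, T0=1.0, cooling=0.99, iters=2000):
    # Accept worse candidates with probability exp(-delta/T) so the
    # search can escape local optima; T shrinks geometrically.
    x, fx, T = x0, cost(x0), T0
    best, best_f = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        T *= cooling
    return best, best_f

# Toy stand-in for a filter-composition search: integer "structuring
# element weights" with a known optimum at (1, 3, 1).
cost = lambda w: float(np.sum((np.array(w) - [1, 3, 1]) ** 2))
neighbor = lambda w: tuple(np.clip(np.array(w) + rng.integers(-1, 2, 3), 0, 5))
best, best_f = simulated_annealing(cost, neighbor, (0, 0, 0))
```

In a real design task the cost would be the filtering error over a training signal, evaluated by running the candidate soft morphological filter sequence.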
Optimally (Distributional-)Robust Kalman Filtering
Ruckdeschel, Peter
Peter Ruckdeschel (Fraunhofer ITWM). Abstract: We present optimality results for robust Kalman filtering, where robustness is understood … Classifications: primary 93E11; secondary 62F35. Keywords and phrases: robustness, Kalman filter, innovation …
Enhancing Particle Filters using Local Likelihood Sampling
Szepesvari, Csaba
Péter Torma and Csaba Szepesvári. In this paper we propose a new two-stage sampling procedure to boost the performance of particle filters … the weighted sample becomes a good representation of the new posterior that takes into account the new …
Rickard Karlsson ISIS Particle Filtering in Practice
Schön, Thomas
Rickard Karlsson, ISIS, 2004-11-04. Particle Filtering in Practice: Sensor Fusion, Positioning and Tracking. Rickard Karlsson, Automatic Control, Linköping University, Sweden (rickard@isy.liu.se). Particle filtering within ISIS from my perspective.
Particle Filters for Mobile Robot Localization
Teschner, Matthias
Dieter Fox, Sebastian Thrun, Wolfram Burgard … of mobile robotics. In particular, we report results of applying particle filters to the problem of mobile robot localization, which is the problem of estimating a robot's pose relative to a map of its …
Angle only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2011-09-01
We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.
NASA Astrophysics Data System (ADS)
Plaza Guingla, Douglas A.; Keyser, Robin; Lannoy, Gabriëlle J. M.; Giustarini, Laura; Matgen, Patrick; Pauwels, Valentijn R. N.
2013-07-01
The objective of this paper is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. The results indicate that the inclusion of the resample-move step in the standard particle filter and the use of an optimal importance density function in the Gaussian particle filter improve the effectiveness of particle filters. Moreover, an optimization of the forecast ensemble used in this study allowed for a better performance of the modified Gaussian particle filter compared to the particle filter with resample-move step.
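The resample-move idea, i.e. following resampling with Markov chain Monte Carlo steps that leave the posterior invariant, can be sketched as follows (a generic illustration with a toy Gaussian posterior, not the hydrological setup of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def mcmc_move(particles, log_post, step=0.2, n_moves=20):
    # Metropolis random-walk moves that leave log_post invariant:
    # duplicated particles spread back out over the posterior.
    x = particles.copy()
    for _ in range(n_moves):
        prop = x + rng.normal(0.0, step, size=x.shape)
        log_accept = log_post(prop) - log_post(x)
        accept = np.log(rng.random(size=x.shape)) < log_accept
        x = np.where(accept, prop, x)
    return x

# Toy posterior: standard normal. Start from a fully collapsed sample,
# as if resampling had duplicated a single particle 500 times.
log_post = lambda x: -0.5 * x ** 2
collapsed = np.zeros(500)
moved = mcmc_move(collapsed, log_post)
```

Because each move targets the same posterior, the weights stay uniform after the move step; only the diversity of the particle set improves.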
Early maritime applications of particle filtering
NASA Astrophysics Data System (ADS)
Richardson, Henry R.; Stone, Lawrence D.; Monach, W. Reynolds; Discenza, Joseph H.
2004-01-01
This paper provides a brief history of some operational particle filters that were used by the U.S. Coast Guard and U.S. Navy. Starting in 1974 the Coast Guard system provided Search and Rescue Planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972. The systems operated in a sequential, Bayesian manner. A prior distribution for the target's location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target's location. This distribution was used as the prior for the next iteration of planning and search.
OPTIMIZATION OF ADVANCED FILTER SYSTEMS
R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar
2002-06-30
Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.
Optimal frequency domain textural edge detection filter
NASA Technical Reports Server (NTRS)
Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.
1985-01-01
An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.
Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer
2015-01-01
The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m⁻² when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m⁻² with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins.
Adaptive Mallow's optimization for weighted median filters
NASA Astrophysics Data System (ADS)
Rachuri, Raghu; Rao, Sathyanarayana S.
2002-05-01
This work extends the idea of spectral optimization for the design of weighted median filters and employs adaptive filtering that updates the coefficients of the FIR filter from which the weights of the median filter are derived. Mallows' theory of non-linear smoothers [1] has proven to be of great theoretical significance, providing simple design guidelines for non-linear smoothers. It allows us to find a set of positive weights for a WM filter whose sample selection probabilities (SSPs) are as close as possible to an SSP set predetermined by Mallows. Sample selection probabilities have been used as a basis for designing stack smoothers, as they give a measure of the filter's detail-preserving ability and give non-negative filter weights. We extend this idea to design weighted median filters admitting negative weights. The new method first finds the linear FIR filter coefficients adaptively; these are then used to determine the weights of the median filter. WM filters can be designed to have band-pass, high-pass, as well as low-pass frequency characteristics. Unlike linear filters, however, weighted median filters are robust in the presence of impulsive noise, as shown by the simulation results.
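For reference, a weighted median with positive weights reduces to sorting the window and accumulating weights until half the total weight is reached; a minimal sketch (illustrative, not the paper's adaptive design):

```python
import numpy as np

def weighted_median(samples, weights):
    # Sort samples, accumulate the co-sorted weights, and return the
    # first sample where the running sum reaches half the total weight.
    order = np.argsort(samples)
    s = np.asarray(samples)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return s[np.searchsorted(cum, 0.5 * w.sum())]

def wm_filter(signal, weights):
    # Slide a weighted-median window over the signal (edges left as-is).
    k = len(weights) // 2
    out = signal.copy()
    for i in range(k, len(signal) - k):
        out[i] = weighted_median(signal[i - k:i + k + 1], weights)
    return out

x = np.array([1.0, 1.0, 9.0, 1.0, 1.0, 1.0])   # impulse at index 2
y = wm_filter(x, weights=[1, 2, 1])             # impulse is rejected
```

The robustness to impulsive noise claimed in the abstract is visible here: a linear FIR filter with the same weights would smear the impulse rather than remove it.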
Particle filter parallelisation using random network based resampling
Frean, Marcus
Praveen B. Choppala, Paul D. …, New Zealand ({praveen, pault, marcus}@ecs.vuw.ac.nz). Abstract: … the particle filter in parallel architectures. However, the resampling stage in the particle filter requires …
Distance estimation using RSSI and particle filter.
Sve?ko, Janja; Malajner, Marko; Gleich, Dušan
2015-03-01
This paper presents a particle filter algorithm for distance estimation using multiple antennas on the receiver's side and only one transmitter, where a received signal strength indicator (RSSI) of radio frequency was used. Two different placements of antennas were considered (parallel and circular). The physical layer of IEEE standard 802.15.4 was used for communication between transmitter and receiver. The distance was estimated as the hidden state of a stochastic system and therefore a particle filter was implemented. The RSSI acquisitions were used for the computation of important weights within the particle filter algorithm. The weighted particles were re-sampled in order to ensure proper distribution and density. Log-normal and ground reflection propagation models were used for the modeling of a prior distribution within a Bayesian inference. PMID:25457044
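The weight computation from RSSI readings can be sketched with a log-distance path-loss model (all parameters here are invented for illustration; the paper's propagation models and antenna geometry are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

# Log-distance path-loss model (illustrative parameters):
#   rssi = P0 - 10 * n * log10(d) + Gaussian noise,  d in metres
P0, N_EXP, SIGMA = -40.0, 2.0, 4.0

def rssi_likelihood(rssi, d):
    mu = P0 - 10.0 * N_EXP * np.log10(np.maximum(d, 1e-3))
    return np.exp(-0.5 * ((rssi - mu) / SIGMA) ** 2)

def pf_distance_step(particles, rssi):
    # Weight each candidate distance by the RSSI likelihood, resample,
    # then jitter slightly so the particle cloud does not collapse.
    w = rssi_likelihood(rssi, particles)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, 0.05, len(particles))

particles = rng.uniform(0.5, 20.0, 2000)   # prior: somewhere within 20 m
for rssi in (-52.0, -53.0, -51.5):          # repeated RSSI readings
    particles = pf_distance_step(particles, rssi)
est = float(particles.mean())               # roughly 4 m under this model
```

Repeated measurements sharpen the posterior over distance even though each individual RSSI reading is very noisy, which is the point of treating distance as the hidden state of a stochastic system.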
Decomposition schemes with optimal soft morphological denoising filters
NASA Astrophysics Data System (ADS)
Koivisto, Pertti T.; Huttunen, Heikki; Kuosmanen, Pauli
1997-04-01
The filtering performance of the soft morphological filters in decomposition schemes is studied. Optimal soft morphological filters for the filtering of the decomposition bands are sought and their properties are analyzed. The performance and properties of the optimal filters found are compared to those of the corresponding optimal composite soft morphological filters. Also, the applicability of different decomposition methods, especially those related to soft morphological filters, is studied.
Testing particle filters on convective scale dynamics
NASA Astrophysics Data System (ADS)
Haslehner, Mylene; Craig, George. C.; Janjic, Tijana
2014-05-01
Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging of the variables shows significant improvement compared to the ensemble Kalman filter and the sequential importance resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow-water model (Würsch and Craig, 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.
Particle Filtering Applied to Musical Tempo Tracking
2004-11-07
This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms...
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two-dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is found in "Numerical Recipes in C: The Art of Scientific Computing," available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.
Optimal design of active EMC filters
NASA Astrophysics Data System (ADS)
Chand, B.; Kut, T.; Dickmann, S.
2013-07-01
A recent trend in the automotive industry is adding electrical drive systems to conventional drives. Electrification broadens the range of usable energy sources and provides great opportunities for environmentally friendly mobility. However, the electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables, so communication can be degraded or even permanently disrupted. Different approaches are possible to minimize these interferences; one is the use of EMC filters. However, the diversity of filters is very large, and determining an appropriate filter for each application is time-consuming. Therefore, the filter design is determined using a simulation tool that includes an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.
A Uniformly Convergent Adaptive Particle Filter Anastasia Papavasiliou
Del Moral , Pierre
... is asymptotically consistent and, in addition, the optimal filter of the augmented system, i.e. the one where ... to compute the optimal filter. A common approach for dealing with unknown parameters in the system ... see [14]. In this paper, we discuss the problem of computing the optimal filter for the augmented ...
Computationally efficient angles-only tracking with particle flow filters
NASA Astrophysics Data System (ADS)
Costa, Russell; Wettergren, Thomas A.
2015-05-01
Particle filters represent the current state of the art in nonlinear, non-Gaussian filtering. They are easy to implement and have been applied in numerous domains. That said, particle filters can be impractical for problems with state dimensions greater than four unless problem-specific efficiencies can be identified. This "curse of dimensionality" makes particle filters a computationally burdensome approach, and the associated resampling makes parallel processing difficult. In the past several years an alternative to particle filters, dubbed particle flow, has emerged as a potentially much more efficient method for solving nonlinear, non-Gaussian problems. Particle flow filtering (unlike particle filtering) is a deterministic approach; however, its implementation entails solving an under-determined system of partial differential equations which has infinitely many potential solutions. In this work we apply the filters to angles-only target motion analysis problems in order to quantify the computational gains (if any) over standard particle filtering approaches. In particular we focus on the simplest form of particle flow filter, known as the exact particle flow filter. This form assumes a Gaussian prior and likelihood function of the unknown target states and is then linearized, as is standard practice for extended Kalman filters. We implement both particle filters and particle flows and perform numerous numerical experiments for comparison.
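For contrast with the deterministic particle flow approach, the stochastic baseline the abstract compares against — a bootstrap (SIR) particle filter with propagate/weight/resample steps — can be sketched as follows. The names and noise models are illustrative, not taken from the paper.

```python
import numpy as np

def bootstrap_pf(y, x0_sampler, transition, likelihood,
                 n_particles=500, rng=None):
    """Minimal bootstrap (SIR) particle filter for a scalar state.

    y            : sequence of observations
    x0_sampler   : draws the initial particle cloud
    transition   : propagates particles through the dynamics
    likelihood   : per-particle weight of observation y_t
    Returns the sequence of posterior-mean state estimates.
    """
    rng = rng or np.random.default_rng(0)
    x = x0_sampler(n_particles, rng)
    estimates = []
    for y_t in y:
        x = transition(x, rng)                  # propagate through dynamics
        w = likelihood(y_t, x)
        w = w / w.sum()                         # normalize importance weights
        estimates.append(np.sum(w * x))         # weighted posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resample
        x = x[idx]
    return np.array(estimates)
```

The resampling line is the step that is hard to parallelize and that particle flow methods avoid by transporting particles deterministically.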
Training-based optimization of soft morphological filters
NASA Astrophysics Data System (ADS)
Koivisto, Pertti; Huttunen, Heikki; Kuosmanen, Pauli
1996-07-01
Soft morphological filters form a large class of nonlinear filters with many desirable properties. However, few design methods exist for these filters. This paper demonstrates how optimization schemes, namely simulated annealing and genetic algorithms, can be employed in the search for soft morphological filters having optimal performance in a given signal processing task. Furthermore, the properties of the resulting optimal soft morphological filters in different situations are analyzed.
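Of the two optimization schemes named above, simulated annealing is the simpler to sketch. The generic loop below could drive a search over filter parameters given a training-based cost function; it is a sketch under assumed names, not the paper's algorithm, and `cost`/`neighbor` stand in for the task-specific filter evaluation and parameter perturbation.

```python
import math
import random

def simulated_annealing(cost, initial, neighbor,
                        t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing loop.

    cost(x)         : objective to minimize (e.g. training error of a filter)
    neighbor(x, rng): random perturbation of the current parameters
    Returns the best parameters found and their cost.
    """
    rng = random.Random(seed)
    current = best = initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        c_cand = cost(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if c_cand < c_cur or rng.random() < math.exp((c_cur - c_cand) / max(t, 1e-12)):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling                            # geometric cooling schedule
    return best, c_best
```

For a soft morphological filter, `x` would encode the structuring system and order parameters, and `cost` would measure output error against a desired signal on training data.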
Detecting Separations of Moving Objects for Particle Filter
NASA Astrophysics Data System (ADS)
Takechi, Keisuke; Kurahashi, Wataru; Fukui, Shinji; Iwahori, Yuji
This paper treats the case in which a group of objects is tracked by a group of particles of a particle filter. When the object group separates, the particle filter may fail to track, or there may be objects which are not tracked by the filter. This paper proposes a new method for detecting separations of objects tracked by the particle filter. After the detection, a group of particles is rearranged to each object group so that all objects can be tracked by the particle filter. Results are demonstrated by experiments using real videos.
FIR Filter Design via Spectral Factorization and Convex Optimization (UCSB, 10/24/97). Outline: convex optimization and interior-point methods; FIR filters and magnitude specs; spectral factorization; examples (lowpass filter).
Ensemble Particle Filter with Posterior Gaussian Resampling
X. Xiong1 and I. M. Navon1, 1School ... March 2005. ABSTRACT: An ensemble particle filter (EnPF) was recently developed as a fully nonlinear filter of Bayesian conditional probability estimation, along with the well known ensemble Kalman filter
Optimal digital filtering for tremor suppression.
Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R
2000-05-01
Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com. PMID:10851810
Optimal Estimation 5.3 State Space Kalman Filters
Nourbakhsh, Illah
Chapter 5, Optimal Estimation, Part 3: 5.3 State Space Kalman Filters (Mobile Robotics, Prof. Alonzo Kelly, CMU RI). Outline: 5.3.1 Introduction; 5.3.2 Linear Discrete Time Kalman Filter; 5.3.3 Kalman Filters for Nonlinear Systems; 5.3.4 Simple Example: 2D Mobile Robot.
GNSS data filtering optimization for ionospheric observation
NASA Astrophysics Data System (ADS)
D'Angelo, G.; Spogli, L.; Cesaroni, C.; Sgrigna, V.; Alfonsi, L.; Aquino, M. H. O.
2015-12-01
In recent years, the use of GNSS (Global Navigation Satellite Systems) data has been gradually increasing, for both scientific studies and technological applications. High-rate GNSS receivers, able to generate and output 50-Hz phase and amplitude samples, are commonly used to study electron density irregularities within the ionosphere. Ionospheric irregularities may cause scintillations, which are rapid and random fluctuations of the phase and the amplitude of the received GNSS signals. For scintillation analysis, GNSS signals observed at an elevation angle lower than an arbitrary threshold (usually 15°, 20° or 30°) are commonly filtered out, to remove the possible error sources due to the local environment where the receiver is deployed. Indeed, the signal scattered by the environment surrounding the receiver could mimic ionospheric scintillation, because buildings, trees, etc. might create diffusion, diffraction and reflection. Although widely adopted, the elevation angle threshold has some downsides, as it may under- or overestimate the actual impact of multipath due to the local environment. In particular, an incorrect selection of the field of view spanned by the GNSS antenna may lead to the misidentification of scintillation events at low elevation angles. With the aim of tackling the non-ionospheric effects induced by multipath at ground, in this paper we introduce a filtering technique, termed SOLIDIFY (Standalone OutLiers IDentIfication Filtering analYsis technique), aimed at excluding the multipath sources of non-ionospheric origin to improve the quality of the information obtained from the GNSS signal at a given site. SOLIDIFY is a statistical filtering technique based on the signal quality parameters measured by scintillation receivers. The technique is applied and optimized on the data acquired by a scintillation receiver located at the Istituto Nazionale di Geofisica e Vulcanologia, in Rome.
The results of the exercise show that, in the considered case of a noisy site under quiet ionospheric conditions, the SOLIDIFY optimization maximizes the quality, instead of the quantity, of the data.
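The abstract does not spell out SOLIDIFY's internals, but the contrast it draws — a fixed elevation-angle cut versus a statistical, quality-driven rejection of multipath outliers — can be illustrated with two simple masks. The robust MAD-based rejection below is only an assumed stand-in in the spirit of the technique, not the published algorithm.

```python
import numpy as np

def elevation_mask(elev_deg, threshold_deg=30.0):
    """Conventional approach: keep only observations above a fixed elevation."""
    return elev_deg >= threshold_deg

def quality_outlier_mask(quality, n_sigma=3.0):
    """Standalone outlier rejection on a signal-quality parameter
    (e.g. C/N0), using a robust median/MAD spread estimate.
    Illustrative stand-in for a quality-driven filter, not SOLIDIFY itself."""
    med = np.median(quality)
    mad = np.median(np.abs(quality - med))   # median absolute deviation
    sigma = 1.4826 * mad                     # MAD -> Gaussian-equivalent sigma
    return np.abs(quality - med) <= n_sigma * sigma
```

Unlike the elevation cut, the quality mask can retain clean low-elevation observations while discarding multipath-contaminated ones at any elevation.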
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; LEE, H.; Hopson, T. M.
2012-12-01
Hydrologic data assimilation techniques provide a means to improve river discharge forecasts through updating hydrologic model states and correcting the atmospheric forcing data by optimally combining model outputs with observations. The performance of the assimilation procedure, however, depends on the data assimilation technique used and the amount of uncertainty in the data sets. To investigate these effects, we comparatively evaluate three data assimilation techniques, namely the ensemble Kalman filter (EnKF), the particle filter (PF) and a variational (VAR) technique, which assimilate discharge and synthetic soil moisture data at various uncertainty levels into the Sacramento Soil Moisture Accounting (SAC-SMA) model used by the National Weather Service (NWS) for river forecasting in the United States. The study basin is the Greens Bayou watershed, with an area of 178 km2, in eastern Texas. In the presentation, we summarize the results of the comparisons and discuss the challenges of applying each technique for hydrologic applications.
Optimal edge filters explain human blur detection.
McIlhagga, William H; May, Keith A
2012-01-01
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222
Groupwise surface correspondence using particle filtering
NASA Astrophysics Data System (ADS)
Li, Guangxu; Kim, Hyoungseop; Tan, Joo Kooi; Ishikawa, Seiji
2015-03-01
To obtain an effective interpretation of organic shape using statistical shape models (SSMs), establishing the correspondence of landmarks across all training samples is the most challenging part of model building. In this study, a coarse-to-fine groupwise correspondence method for 3-D polygonal surfaces is proposed. We prepare a reference model in advance. Then all the training samples are mapped to a unified spherical parameter space. According to the positions of the landmarks of the reference model, candidate regions for correspondence are chosen. Finally we refine the perceptually correct correspondences between landmarks using a particle filter algorithm, where the likelihood of local surface features is introduced as the criterion. The proposed method was applied to the correspondence of 9 cases of left lung training samples. Experimental results show the proposed method is flexible and under-constrained.
Particle approximation of multiple object filtering problems P. Del Moral
Del Moral , Pierre
Particle association measures ... Introduction/notation ... Defense Industrial Research project; some basic research projects: 1. Defense industrial contract: ALEA INRIA & DCNS Toulon (2009); 2. National Research ... Particle approximation of multiple object filtering problems, P. Del Moral, UNSW, School ...
Detection with particle filtering in BLAST systems Yufei Huang
... of particle filtering for detection in BLAST systems. A novel dynamic state-space model (DSSM) is constructed ... the possibility of constructing a dynamic state-space model (DSSM) for BLAST systems. It is based on QR ... Detection with particle filtering in BLAST systems, Yufei Huang, Department of Electrical Engineering
Rao-Blackwellized Particle Filter for Multiple Target Tracking
Kaski, Samuel
Rao-Blackwellized Particle Filter for Multiple Target Tracking, Simo Särkkä, Aki Vehtari, Jouko ... Rao-Blackwellization. Key words: multiple target tracking, data association, unknown number of targets, Rao ... in which we proposed a Rao-Blackwellized particle filtering based multiple target tracking algorithm called ...
A Boosted Particle Filter: Multitarget Detection and Tracking
Freitas, Nando de
... particle filter [17] is ideally suited to multi-target tracking as it assigns a mixture component to each ... and fully automatic multiple object tracking system. 1 Introduction. Automated tracking of multiple objects ... non-Gaussianity ... Various researchers have attempted to extend particle filters to multi-target tracking. Among ...
Evolutionary Gabor Filter Optimization with Application to Vehicle Detection
Bebis, George
Evolutionary Gabor Filter Optimization with Application to Vehicle Detection, Zehang Sun1, George ... of Gabor filters in pattern classification, their design and selection have mostly been done on a trial-and-error basis. Existing techniques are either only suitable for a small number of filters or less ...
Optimization of photon correlations by frequency filtering
NASA Astrophysics Data System (ADS)
González-Tudela, Alejandro; del Valle, Elena; Laussy, Fabrice P.
2015-04-01
Photon correlations are a cornerstone of quantum optics. Recent works [E. del Valle, New J. Phys. 15, 025019 (2013), 10.1088/1367-2630/15/2/025019; A. Gonzalez-Tudela et al., New J. Phys. 15, 033036 (2013), 10.1088/1367-2630/15/3/033036; C. Sanchez Muñoz et al., Phys. Rev. A 90, 052111 (2014), 10.1103/PhysRevA.90.052111] have shown that by keeping track of the frequency of the photons, rich landscapes of correlations are revealed. Stronger correlations are usually found where the system emission is weak. Here, we characterize both the strength and signal of such correlations, through the introduction of the "frequency-resolved Mandel parameter." We study a plethora of nonlinear quantum systems, showing how one can substantially optimize correlations by combining parameters such as pumping, filtering windows and time delay.
Optimal filtering of the LISA data
Andrzej Krolak; Massimo Tinto; Michele Vallisneri
2007-07-19
The LISA time-delay-interferometry responses to a gravitational-wave signal are rewritten in a form that accounts for the motion of the LISA constellation around the Sun; the responses are given in closed analytic forms valid for any frequency in the band accessible to LISA. We then present a complete procedure, based on the principle of maximum likelihood, to search for stellar-mass binary systems in the LISA data. We define the required optimal filters, the amplitude-maximized detection statistic (analogous to the F statistic used in pulsar searches with ground-based interferometers), and discuss the false-alarm and detection probabilities. We test the procedure in numerical simulations of gravitational-wave detection.
Multispectral image denoising with optimized vector bilateral filter.
Peng, Honghong; Rao, Raghuveer; Dianat, Sohail A
2014-01-01
Vector bilateral filtering has been shown to provide good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal to noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimization of the Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument through an analytical example as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches. PMID:24184727
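To make concrete which parameters such an optimization procedure would tune, here is a scalar 1D bilateral filter sketch: its weights combine spatial closeness and range (intensity) similarity, governed by the two bandwidths `sigma_s` and `sigma_r`. The paper's vector version extends the range term across spectral bands and selects the parameters by minimizing Stein's unbiased risk estimate; this sketch and its parameter names are illustrative only.

```python
import math

def bilateral_1d(x, sigma_s, sigma_r, radius=3):
    """1D bilateral filter over a list of samples.

    sigma_s : spatial bandwidth (how far neighbors contribute)
    sigma_r : range bandwidth (how similar in value neighbors must be)
    These are the parameters an optimization (e.g. SURE minimization)
    would select instead of choosing them ad hoc.
    """
    out = []
    n = len(x)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            # Weight = spatial Gaussian * range Gaussian.
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

With a small `sigma_r` the filter smooths noise within flat regions while leaving sharp edges almost untouched, which is the edge-preservation/noise-removal tradeoff the abstract refers to.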
Application of particle filtering algorithm in image reconstruction of EMT
NASA Astrophysics Data System (ADS)
Wang, Jingwen; Wang, Xu
2015-07-01
To improve the image quality of electromagnetic tomography (EMT), a new image reconstruction method of EMT based on a particle filtering algorithm is presented. Firstly, the principle of image reconstruction of EMT is analyzed. Then the search process for the optimal solution for image reconstruction of EMT is described as a system state estimation process, and the state space model is established. Secondly, to obtain the minimum variance estimation of image reconstruction, the optimal weights of random samples obtained from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results have shown that the average image error of reconstruction results obtained by the method mentioned in this paper is 42.61%, and the average correlation coefficient with the original image is 0.8706, which are much better than corresponding indicators obtained by LBP, Landweber and Kalman Filter algorithms. So, this EMT image reconstruction method has high efficiency and accuracy, and provides a new method and means for EMT research.
Blended particle filters for large-dimensional chaotic dynamical systems.
Majda, Andrew J; Qi, Di; Sapsis, Themistoklis P
2014-05-27
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below. PMID:24825886
Human-Manipulator Interface Using Particle Filter
Wang, Xueqian
2014-01-01
This paper utilizes a human-robot interface system which incorporates particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use Camshift algorithm to track the hand. PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases in a short period of time when the sensors fail to detect the hand motion. Therefore, a methodology to correct the translation error is required. What is more, to be subject to the perceptive limitations and the motor limitations, human operator is hard to carry out the high precision operation. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator to improve the accuracy and reliability in determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
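The regularization step that preserves sample diversity can be illustrated in isolation: after resampling, each duplicated particle is jittered by a kernel draw so the cloud does not collapse onto a few values. The bandwidth rule below is an assumed illustration; the paper combines kernel regularization with an MCMC move step, which this sketch omits.

```python
import numpy as np

def regularized_resample(particles, weights, h=0.1, rng=None):
    """Resample with Gaussian-kernel jitter ('regularization').

    Plain multinomial resampling duplicates high-weight particles;
    the added kernel noise perturbs the duplicates, preserving
    sample diversity. Bandwidth factor h is illustrative.
    """
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    idx = rng.choice(n, n, p=weights / weights.sum())
    sigma = h * np.std(particles)          # kernel bandwidth scaled to spread
    return particles[idx] + rng.normal(0.0, sigma, n)
```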
Some issues and results on the EnKF and particle filters for meteorological models
Baehr, Christophe
Some issues and results on the EnKF and particle filters for meteorological models (Chaos 2009) ... The nonlinear filtering problem; particle filter resolution. C. Baehr & O. Pannekoucke, EnKF and particle filters for meteorological models ... Nonlinear ...
Gao, Shuang; Kim, Jinyong; Yermakov, Michael; Elmashae, Yousef; He, Xinjian; Reponen, Tiina; Grinshpun, Sergey A
2015-01-01
Filtering facepiece respirators (FFRs) are commonly worn by first responders, first receivers, and other exposed groups to protect against exposure to airborne particles, including those originated by combustion. Most of these FFRs are NIOSH-certified (e.g., N95-type) based on the performance testing of their filters against charge-equilibrated aerosol challenges, e.g., NaCl. However, it has not been examined if the filtration data obtained with the NaCl-challenged FFR filters adequately represent the protection against real aerosol hazards such as combustion particles. A filter sample of N95 FFR mounted on a specially designed holder was challenged with NaCl particles and three combustion aerosols generated in a test chamber by burning wood, paper, and plastic. The concentrations upstream (Cup) and downstream (Cdown) of the filter were measured with a TSI P-Trak condensation particle counter and a Grimm Nanocheck particle spectrometer. Penetration was determined as (Cdown/Cup) ×100%. Four test conditions were chosen to represent inhalation flows of 15, 30, 55, and 85 L/min. Results showed that the penetration values of combustion particles were significantly higher than those of the "model" NaCl particles (p < 0.05), raising a concern about applicability of the N95 filters performance obtained with the NaCl aerosol challenge to protection against combustion particles. Aerosol type, inhalation flow rate and particle size were significant (p < 0.05) factors affecting the performance of the N95 FFR filter. In contrast to N95 filters, the penetration of combustion particles through R95 and P95 FFR filters (were tested in addition to N95) were not significantly higher than that obtained with NaCl particles. The findings were attributed to several effects, including the degradation of an N95 filter due to hydrophobic organic components generated into the air by combustion. Their interaction with fibers is anticipated to be similar to those involving "oily" particles. 
The findings of this study suggest that the efficiency of N95 respirator filters obtained with the NaCl aerosol challenge may not accurately predict (and rather overestimate) the filter efficiency against combustion particles. PMID:26010982
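The penetration metric used throughout the study is a one-line calculation; a worked example (with made-up concentrations, not the study's data):

```python
def penetration_percent(c_down, c_up):
    """Filter penetration as defined in the study: (Cdown/Cup) * 100%."""
    return 100.0 * c_down / c_up

def efficiency_percent(c_down, c_up):
    """Complementary filter collection efficiency."""
    return 100.0 - penetration_percent(c_down, c_up)
```

For example, a downstream count of 5 particles/cm3 against an upstream count of 100 particles/cm3 gives 5% penetration, i.e. 95% efficiency.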
Method of concurrently filtering particles and collecting gases
Mitchell, Mark A; Meike, Annemarie; Anderson, Brian L
2015-04-28
A system for concurrently filtering particles and collecting gases. Materials may be added (e.g., via coating the ceramic substrate, use of loose powder(s), or other means) to a HEPA filter (ceramic, metal, or otherwise) to collect gases (e.g., radioactive gases such as iodine). The gases could be radioactive, hazardous, or valuable.
Ballistic target tracking algorithm based on improved particle filtering
NASA Astrophysics Data System (ADS)
Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang
2015-10-01
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear and non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. CMPF has better performance in estimating the state and parameters of nonlinear and non-Gaussian systems. Monte Carlo simulation results show that this method can effectively solve the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, thereby obtaining better particles to take part in the estimation. Meanwhile, CMPF can improve state estimation precision and convergence speed compared with the EKF, UKF and the ordinary particle filter.
Body Part Tracking with Random Forests and Particle Filters
Freitas, Nando de
... dependences. Most object tracking research utilizes particle filters, also known as Sequential Monte Carlo ... with your hand is incredibly frustrating if it jumps around while your hand is moving smoothly along an arc
Particle filtering with Lagrangian data in a point vortex model
Mitra, Subhadeep
2012-01-01
Particle filtering is a technique used for state estimation from noisy measurements. In fluid dynamics, a popular problem called Lagrangian data assimilation (LaDA) uses Lagrangian measurements in the form of tracer positions ...
An optimal blind temporal motion blur deconvolution filter Yohann Tendero
Ferguson, Thomas S.
An optimal blind temporal motion blur deconvolution filter, Yohann Tendero and Jean-Michel Morel ... a filter restoring blindly any nonuniform motion blur with an amplitude below one pixel per frame; further examples and a C++ implementation are available at http://www.math.ucla.edu/~tendero/blind
Optimally Robust Kalman Filtering at Work: AO-, IO-, and Simultaneously IO-and AO-Robust Filters
Ruckdeschel, Peter
Optimally Robust Kalman Filtering at Work: AO-, IO-, and Simultaneously IO- and AO-Robust Filters. Abstract: We take up optimality results for robust Kalman filtering from Ruckdeschel (2001, 2010) ... (2006), Fried et al. (2007). Keywords: robustness, Kalman filter, innovation outlier, additive outlier
Optimal filter systems for photometric redshift estimation
N. Benitez; M. Moles; J. A. L. Aguerri; E. Alfaro; T. Broadhurst; J. Cabrera; F. J. Castander; J. Cepa; M. Cervino; D. Cristobal-Hornillos; A. Fernandez-Soto; R. M. Gonzalez-Delgado; L. Infante; I. Marquez; V. J. Martinez; J. Masegosa; A. Del Olmo; J. Perea; F. Prada; J. M. Quintana; S. F. Sanchez
2008-12-18
In the next years, several cosmological surveys will rely on imaging data to estimate the redshift of galaxies, using traditional filter systems with 4-5 optical broad bands; narrower filters improve the spectral resolution, but strongly reduce the total system throughput. We explore how photometric redshift performance depends on the number of filters n_f, characterizing the survey depth through the fraction of galaxies with unambiguous redshift estimates. For a combination of total exposure time and telescope imaging area of 270 hrs m^2, 4-5 filter systems perform significantly worse, both in completeness depth and precision, than systems with n_f >= 8 filters. Our results suggest that for low n_f, the color-redshift degeneracies overwhelm the improvements in photometric depth, and that even at higher n_f, the effective photometric redshift depth decreases much more slowly with filter width than naively expected from the reduction in S/N. Adding near-IR observations improves the performance of low n_f systems, but still the system which maximizes the photometric redshift completeness is formed by 9 filters with logarithmically increasing bandwidth (constant resolution) and half-band overlap, reaching ~0.7 mag deeper, with 10% better redshift precision, than 4-5 filter systems. A system with 20 constant-width, non-overlapping filters reaches only ~0.1 mag shallower than 4-5 filter systems, but has a precision almost 3 times better, dz = 0.014(1+z) vs. dz = 0.042(1+z). We briefly discuss a practical implementation of such a photometric system: the ALHAMBRA survey.
Optimal filter bandwidth for pulse oximetry
NASA Astrophysics Data System (ADS)
Stuban, Norbert; Niwayama, Masatsugu
2012-10-01
Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
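The corner-frequency effect discussed above can be demonstrated with the simplest possible low-pass stage, a single-pole IIR filter: DC and the fundamental pass almost unchanged while higher harmonics are attenuated. This is a generic sketch for illustration, not the filter topology used in the paper's pulse oximeter.

```python
import math

def lowpass(signal, fc, fs):
    """Single-pole IIR low-pass filter.

    fc : corner (cutoff) frequency in Hz
    fs : sampling frequency in Hz
    Components near DC pass with ~unity gain; components well above
    fc are strongly attenuated.
    """
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)  # smoothing coefficient
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)        # exponential moving average update
        out.append(y)
    return out
```

With fc set near the pulse fundamental (~1 Hz), the harmonics of the pulse waveform are suppressed, which, per the study's finding, does not degrade the R-ratio accuracy.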
On the Distance to Optimality of the Geometric Approximate Minimum-Energy Attitude Filter
Trumpf, Jochen
This work studies the distance to optimality of the recent geometric approximate minimum-energy (GAME) filter, an attitude filter for estimation on the rotation group SO(3). The GAME filter approximates the minimum-energy (optimal) filtering solution.
COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS
The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...
Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design
Troncoso Romero, David Ernesto
2014-01-01
Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
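The droop-and-sharpening trade-off named in this abstract can be reproduced with a few lines of arithmetic. The sketch below uses the comb magnitude response and the classic Kaiser-Hamming sharpening polynomial H²(3 − 2H), rather than the paper's optimized polynomials, purely to illustrate the effect.

```python
import math

def comb_mag(w, M):
    """Magnitude response of an M-th order comb (CIC) filter."""
    if abs(w) < 1e-12:
        return 1.0
    return abs(math.sin(M * w / 2) / (M * math.sin(w / 2)))

def sharpened_mag(w, M):
    """Classic Kaiser-Hamming sharpening polynomial H^2 * (3 - 2H),
    which flattens the response near H = 1 (passband) and near
    H = 0 (stopband) simultaneously."""
    H = comb_mag(w, M)
    return H * H * (3 - 2 * H)

M = 16
w_pass = 0.5 * math.pi / M          # a point inside the narrow passband
droop_plain = 1 - comb_mag(w_pass, M)
droop_sharp = 1 - sharpened_mag(w_pass, M)
# sharpening reduces the passband droop of the plain comb
```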
NASA Astrophysics Data System (ADS)
Sambaer, Wannes; Zatloukal, Martin; Kimmer, Dusan
2013-04-01
A realistic SEM-image-based 3D filter model, considering the transition/free molecular flow regime, Brownian diffusion, aerodynamic slip, particle-fiber and particle-particle interactions, together with a novel Euclidean-distance-map-based methodology for the pressure drop calculation, has been utilized for a polyurethane nanofiber filter prepared via an electrospinning process, in order to more deeply understand the effect of the particle-fiber friction coefficient on filter clogging and basic filter characteristics. The theoretical analysis reveals that an increase in the fiber-particle friction coefficient causes, first, weaker particle penetration into the filter, the creation of dense top layers, and a higher pressure drop (surface filtration), in comparison with a lower friction coefficient, for which deeper particle penetration takes place (depth filtration); second, higher filtration efficiency; third, a higher quality factor; and finally, higher sensitivity of the quality factor to the collected particle mass. Moreover, even when the particle-fiber friction coefficient differs, the cake morphology is very similar.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
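The adaptive-step gradient scheme described in this abstract can be sketched on a toy surrogate. The quadratic `performance` function below is hypothetical (the real metric requires running the correlator on imagery); the part being illustrated is the step-size logic, which grows the step on accepted moves and shrinks it on rejected ones.

```python
def performance(params):
    """Hypothetical surrogate for the composite metric (peak height and
    peak-to-sidelobe ratio); maximum at alpha=0.3, beta=0.5, gamma=0.2."""
    alpha, beta, gamma = params
    return -((alpha - 0.3) ** 2 + (beta - 0.5) ** 2 + (gamma - 0.2) ** 2)

def grad(f, p, h=1e-5):
    """Central-difference gradient estimate."""
    g = []
    for i in range(len(p)):
        q = list(p); q[i] += h
        r = list(p); r[i] -= h
        g.append((f(q) - f(r)) / (2 * h))
    return g

def adaptive_gradient_ascent(f, p, step=0.5, iters=200):
    best = f(p)
    for _ in range(iters):
        g = grad(f, p)
        trial = [pi + step * gi for pi, gi in zip(p, g)]
        if f(trial) > best:
            best, p = f(trial), trial
            step *= 1.2          # accepted: grow the step
        else:
            step *= 0.5          # rejected: shrink the step
    return p

opt = adaptive_gradient_ascent(performance, [0.9, 0.1, 0.8])
```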
Sequential Bearings-Only-Tracking Initiation with Particle Filtering Method
Liu, Bin; Hao, Chengpeng
2013-01-01
The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also derived for performance evaluation. PMID:24453865
Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter
Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.
2012-05-24
To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
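One plausible reading of the eigendecomposition-based selection is sketched below, with invented dimensions and covariances: whiten by the measurement noise covariance, eigendecompose the resulting measurement-information matrix, and keep the leading directions. This is an illustration of the generalized-eigenvalue idea, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: m measurements of an n-dimensional state
n, m, k = 4, 12, 3
H = rng.standard_normal((m, n))          # observation matrix
P = np.diag([4.0, 2.0, 1.0, 0.5])        # forecast (ensemble) covariance
R = np.diag(rng.uniform(0.5, 2.0, m))    # measurement noise covariance

# generalized eigenproblem (H P H^T) v = lam R v, solved by whitening
# with R^{-1/2} and taking an ordinary symmetric eigendecomposition
Rinv_sqrt = np.diag(1.0 / np.sqrt(np.diag(R)))
A = Rinv_sqrt @ H @ P @ H.T @ Rinv_sqrt
lam, V = np.linalg.eigh(A)               # eigenvalues in ascending order
subspace = V[:, -k:]                     # k most informative directions

# projecting the m raw measurements onto these k directions yields a
# reduced measurement vector for the ensemble Kalman filter update
```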
Design of optimal correlation filters for hybrid vision systems
NASA Technical Reports Server (NTRS)
Rajan, Periasamy K.
1990-01-01
Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out the object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters that are optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.
Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
Optimal Recursive Digital Filters for Active Bending Stabilization
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2013-01-01
In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
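The z-plane parameterization named in this abstract can be illustrated with a single biquad: restricting the pole radius to r < 1 guarantees that every candidate in the search space is stable. The numbers below are illustrative, not a launch-vehicle design.

```python
import cmath, math

def biquad_lowpass(r, theta):
    """Second-order recursive low-pass parameterized directly in the
    z-plane by a conjugate pole pair r*exp(+/- j*theta), 0 <= r < 1.
    Keeping r < 1 makes every candidate filter stable by construction."""
    # denominator: 1 - 2 r cos(theta) z^-1 + r^2 z^-2
    a1 = -2 * r * math.cos(theta)
    a2 = r * r
    # normalize so the DC gain H(z=1) equals 1
    b0 = 1 + a1 + a2
    return b0, a1, a2

def magnitude(b0, a1, a2, w):
    """|H(e^{jw})| for H(z) = b0 / (1 + a1 z^-1 + a2 z^-2)."""
    z = cmath.exp(1j * w)
    return abs(b0 / (1 + a1 / z + a2 / (z * z)))

b0, a1, a2 = biquad_lowpass(r=0.9, theta=0.3)
dc = magnitude(b0, a1, a2, 0.0)
hi = magnitude(b0, a1, a2, 0.9 * math.pi)   # near Nyquist
# unity gain at DC, strong attenuation at high frequency
```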
Nonlinear Statistical Signal Processing: A Particle Filtering Approach
Candy, J
2007-09-19
An introduction to particle filtering is presented, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g. Kalman filters) are no longer a restriction to performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.
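The sequential importance sampling/resampling recipe summarized in this abstract fits in a short sketch. The toy model below (nonlinear state transition, direct noisy observation) is illustrative and not taken from this report.

```python
import math, random

random.seed(1)

# minimal bootstrap (SIR) particle filter for a toy 1-D model
N = 500

def f(x):
    """Strongly nonlinear state transition."""
    return 0.5 * x + 25 * x / (1 + x * x)

def step(particles, y, q=3.0, r=1.0):
    # 1. importance sampling: propagate particles through the prior
    particles = [f(x) + random.gauss(0, q) for x in particles]
    # 2. weight each particle by the likelihood of the observation
    #    (tiny floor avoids an all-zero weight vector)
    w = [math.exp(-0.5 * ((y - x) / r) ** 2) + 1e-300 for x in particles]
    # 3. multinomial resampling (the "R" in SIR)
    return random.choices(particles, weights=w, k=len(particles))

x, particles = 0.1, [random.gauss(0, 2) for _ in range(N)]
errs = []
for _ in range(50):
    x = f(x) + random.gauss(0, 3.0)      # simulate the true state
    y = x + random.gauss(0, 1.0)         # noisy observation
    particles = step(particles, y)
    est = sum(particles) / N             # posterior-mean estimate
    errs.append(abs(est - x))
mean_abs_err = sum(errs) / len(errs)
```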
Particle Methods for Filtering & Uncertainty Propagations
Del Moral, Pierre
Lecture-slide excerpt: the signal Xt is a stochastic process observed through noisy measurements (engineering examples: radar, sonar, GPS); the quantity of interest is the posterior law Law(X|Y), with prior Law(X). Particle filters approximate it with a genetic-type population of N particles.
Localization using omnivision-based manifold particle filters
NASA Astrophysics Data System (ADS)
Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond
2015-01-01
Precise and low-cost spatial localization algorithms are an essential component of autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensionality and excessive bandwidth. This paper presents a novel approach to indoor localization using a low-cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low-dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low-dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.
Model Adaptation for Prognostics in a Particle Filtering Framework
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai Frank
2011-01-01
One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
Multi-robot Simultaneous Localization and Mapping using Particle Filters
Howard, Andrew (NASA Jet Propulsion Laboratory, Pasadena, California)
This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM).
Optimization of filtering schemes for broadband astro-combs.
Chang, Guoqing; Li, Chih-Hao; Phillips, David F; Szentgyorgyi, Andrew; Walsworth, Ronald L; Kärtner, Franz X
2012-10-22
To realize a broadband, large-line-spacing astro-comb, suitable for wavelength calibration of astrophysical spectrographs, from a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three additional components: (1) one or more filter cavities to multiply the source-comb's repetition rate and thus line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb. In this paper we analyze the interplay of Fabry-Perot (FP) filter cavities with power amplifiers and nonlinear broadening fiber in the design of astro-combs optimized for radial-velocity (RV) calibration accuracy. We present analytic and numeric models and use them to evaluate a variety of FP filtering schemes (labeled as identical, co-prime, fraction-prime, and conjugate cavities), coupled to chirped-pulse amplification (CPA). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase RV error for spectrograph calibration. In general, filtering with two cavities prior to the CPA fiber amplifier outperforms an amplifier placed between the two cavities. In particular, filtering with conjugate cavities is able to provide <1 cm/s RV calibration error with >300 nm wavelength coverage. Such superior performance will facilitate the search for and characterization of Earth-like exoplanets, which requires <10 cm/s RV calibration error. PMID:23187265
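The line-spacing multiplication by an FP filter cavity follows from the Airy transmission function. The sketch below, with invented finesse and FSR values rather than the paper's design, computes the suppression of the nearest unwanted comb line.

```python
import math

def fp_transmission(f, fsr, finesse):
    """Airy transmission of a lossless Fabry-Perot cavity with free
    spectral range `fsr` and the given finesse (f measured from a
    cavity resonance)."""
    F = (2 * finesse / math.pi) ** 2        # coefficient of finesse
    return 1.0 / (1.0 + F * math.sin(math.pi * f / fsr) ** 2)

# source comb: 1 GHz line spacing; filter cavity: FSR = 16 GHz, so
# every 16th comb line is transmitted (illustrative numbers only)
fsr, finesse, spacing = 16.0, 200.0, 1.0
on_res = fp_transmission(0.0, fsr, finesse)
side = fp_transmission(spacing, fsr, finesse)    # nearest unwanted line
suppression_db = 10 * math.log10(on_res / side)
```

A single cavity of this finesse suppresses the nearest side mode by a few tens of dB, which is why the paper cascades two cavities to reach the required total suppression.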
Optimal fractional delay-IIR filter design using cuckoo search algorithm.
Kumar, Manjeet; Rawat, Tarun Kumar
2015-11-01
This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine the optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meets ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be solved efficiently using conventional gradient-based optimization techniques. A weighted least squares (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA-based approach have been compared to those of well-accepted evolutionary algorithms like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA-based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in convergence rate but also in the optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase errors obtained for the designed 5th-order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA-based 5th-order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and in phase error are 76.04% and 71.25%, respectively. PMID:26391486
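A minimal CSA sketch on the sphere benchmark (not the FD-IIR design problem, whose WLS fitness requires the full frequency-response machinery) shows the two ingredients this abstract names: Lévy-flight moves and abandonment of a fraction pa of nests.

```python
import math, random

random.seed(0)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def sphere(x):
    return sum(xi * xi for xi in x)

def cuckoo_search(f, dim=5, nests=15, iters=300, pa=0.25, alpha=0.01):
    nests_x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(nests)]
    fit = [f(x) for x in nests_x]
    best = min(zip(fit, nests_x))
    for _ in range(iters):
        # propose a new solution by a Levy flight around the best nest
        i = random.randrange(nests)
        new = [xi + alpha * levy_step() * (xi - bi)
               for xi, bi in zip(nests_x[i], best[1])]
        if f(new) < fit[i]:
            nests_x[i], fit[i] = new, f(new)
        # abandon a fraction pa of nests, keeping only improvements
        for j in range(nests):
            if random.random() < pa:
                cand = [random.uniform(-5, 5) for _ in range(dim)]
                if f(cand) < fit[j]:
                    nests_x[j], fit[j] = cand, f(cand)
        best = min(best, min(zip(fit, nests_x)))
    return best

best_fit, best_x = cuckoo_search(sphere)
```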
Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang
2015-01-01
We propose an improved algorithm, a multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization (PSO) algorithm in order to balance exploration and exploitation, as well as to enhance the capacity for global search to solve nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200
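For reference, the single-swarm global-best PSO baseline that BMPSO extends can be sketched as follows; the multiswarm transfer-of-best-particle step is not reproduced here, and the parameter values are conventional choices, not the paper's.

```python
import random

random.seed(0)

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: inertia w, cognitive pull c1 toward each
    particle's personal best, social pull c2 toward the global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest_f

final = pso(sphere)
```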
Optimal filtering in multipulse sequences for nuclear quadrupole resonance detection
NASA Astrophysics Data System (ADS)
Osokin, D. Ya.; Khusnutdinov, R. R.; Mozzhukhin, G. V.; Rameev, B. Z.
2014-05-01
The application of multipulse sequences in nuclear quadrupole resonance (NQR) detection of explosive and narcotic substances has been studied. Various approaches to increasing the signal-to-noise ratio (SNR) of signal detection are considered. We discuss two modifications of the phase-alternated multiple-pulse sequence (PAMS): the 180° pulse sequence with a preparatory pulse and the 90° pulse sequence. The advantages of optimal filtering for NQR detection in the case of coherent steady-state precession are analyzed. It is shown that this technique is effective in filtering out high-frequency and low-frequency noise and in increasing the reliability of NQR detection. Our analysis also shows that the PAMS with 180° pulses is more effective than the PSL sequence from the point of view of applying the optimal filtering procedure to the steady-state NQR signal.
Optimal Correlation Filters for Images with Signal-Dependent Noise
NASA Technical Reports Server (NTRS)
Downie, John D.; Walkup, John F.
1994-01-01
We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.
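Approach (1) under a signal-independent noise model reduces to correlating the image with the (time-reversed) template; for the symmetric template below the two coincide. The data are synthetic, and for the paper's signal-dependent case (approach 2) one would first apply a variance-stabilizing transform before this same correlation.

```python
import random

random.seed(3)

def correlate(signal, template):
    """Sliding-window cross-correlation (valid region only)."""
    n = len(template)
    return [sum(signal[i + j] * template[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

# known pattern buried in additive signal-independent noise; since the
# template is symmetric, the matched filter equals the template itself
template = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
signal = [random.gauss(0, 0.3) for _ in range(100)]
offset = 40
for j, tj in enumerate(template):
    signal[offset + j] += tj

out = correlate(signal, template)
peak_at = max(range(len(out)), key=lambda i: out[i])
# the correlation peak localizes the embedded pattern
```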
Transmit filter design methods for magnetic particle imaging
NASA Astrophysics Data System (ADS)
Zheng, Bo; Goodwill, Patrick; Conolly, Steven
2011-03-01
Magnetic particle imaging (MPI) has emerged as a new imaging modality that uses the nonlinear magnetization behavior of superparamagnetic particles. Due to the need to avoid contamination of particle signals with the simultaneous excitation signal, MPI transmit systems require different design considerations from those in MRI, where excitation and detection are temporally decoupled. Specifically, higher order harmonic distortion in the transmit spectrum can feed through to and contaminate the received signal spectrum. In a prototype MPI scanner, this distortion needs to be attenuated by 90 dB at all frequencies. In this paper, we describe two methods of filtering out harmonic distortion in the transmit spectrum. The first method uses a Butterworth topology while the second a cascaded Butterworth-elliptic topology. We show that whereas the Butterworth filter alone achieves around 16 and 32 dB attenuation at the second and third harmonics, the cascaded filter can achieve around 65 and 73 dB at these harmonics. Finally, we discuss how notch placement in the stopband can also be applied to design highpass filters for MPI detection systems.
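The attenuation-at-harmonics arithmetic for a single Butterworth stage is easy to reproduce from the ideal magnitude response; the frequencies and order below are illustrative assumptions, not the prototype scanner's values.

```python
import math

def butterworth_attenuation_db(f, fc, order):
    """Attenuation of an ideal n-th order Butterworth low-pass,
    from |H(f)|^2 = 1 / (1 + (f/fc)^(2n))."""
    return 10 * math.log10(1 + (f / fc) ** (2 * order))

f0 = 25e3      # excitation tone (illustrative)
fc = 30e3      # cutoff placed just above the excitation
order = 5

a1 = butterworth_attenuation_db(f0, fc, order)       # fundamental
a2 = butterworth_attenuation_db(2 * f0, fc, order)   # 2nd harmonic
a3 = butterworth_attenuation_db(3 * f0, fc, order)   # 3rd harmonic
# the excitation passes nearly unattenuated while the harmonics are
# suppressed progressively more; cascading an elliptic stage, as the
# paper does, deepens the stopband further
```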
Na-Faraday rotation filtering: The optimal point
Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja
2014-01-01
Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can far outperform any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study of the transmission properties of a sodium-vapour-based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state, are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251
PEOPLE TRACKING WITH A MOBILE ROBOT: A COMPARISON OF KALMAN AND PARTICLE FILTERS
Bellotto, Nicola; Hu, Huosheng
Keywords: Mobile Robot, Kalman Filter, Particle Filter, Multisensor Fusion. In this paper we compare three different Bayesian estimators to perform such a task: the Extended Kalman Filter (EKF) ...
Completely Distributed Particle Filters for Target Tracking in Sensor Networks
Jiang, Bo; Ravindran, Binoy
Keywords: sensor network; target tracking; Bayesian estimation; neighborhood estimation. This paper proposes a completely distributed particle filter (or CDPF) for target tracking in sensor networks, and further improves ...
Degeneracy, frequency response and filtering in IMRT optimization.
Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D; Promberger, Claus
2004-07-01
This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques. PMID:15285252
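As a concrete reference point for the quadratic cost function discussed in this abstract, a conjugate-gradient solver on a toy beamlet-to-dose model looks like the sketch below; the random matrix is invented and positivity constraints on the fluences are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy quadratic IMRT-style cost: find beamlet fluences x minimizing
# ||D x - d||^2, where D maps fluences to doses and d is prescribed
m, n = 30, 12
D = rng.random((m, n))
d = rng.random(m)

# normal equations (D^T D) x = D^T d, solved by conjugate gradient
A, b = D.T @ D, D.T @ d

def conjugate_gradient(A, b, iters=100):
    x = np.zeros(len(b))
    r = b - A @ x           # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-20:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = conjugate_gradient(A, b)
```

Early stopping (small `iters`) is exactly the "stopping point" the paper studies: the iterate is smooth at first and acquires high-frequency structure as iterations accumulate.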
Optimal color image restoration: Wiener filter and quaternion Fourier transform
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.
2015-03-01
In this paper, we consider the model of quaternion signal degradation in which the signal is convolved with a kernel and an additive noise is added. The classical treatment of this model leads to the optimal Wiener filter, where optimality is with respect to the mean square error. The characteristic of this filter can be found in the frequency domain by using the Fourier transform. For quaternion signals, the inverse problem is complicated by the fact that quaternion arithmetic is not commutative. The quaternion Fourier transform does not map convolution to the operation of multiplication. In this paper, we analyze the linear model of signal and image degradation with additive independent noise, and the optimal filtering of signals and images in the frequency domain and in the quaternion space.
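For the scalar (non-quaternion) case, the Wiener deconvolution this abstract builds on can be sketched directly in the frequency domain; the blur kernel, noise level, and regularization constant below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)

h = np.zeros(n)
h[:9] = 1.0 / 9.0                        # circular 9-tap blur kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y += rng.normal(0, 0.05, n)              # additive, signal-independent noise

H = np.fft.fft(h)
Y = np.fft.fft(y)
nsr = 0.01                               # hand-tuned noise-to-signal ratio
G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener deconvolution filter
x_hat = np.real(np.fft.ifft(G * Y))

err_degraded = np.mean((y - x) ** 2)
err_restored = np.mean((x_hat - x) ** 2)
# the restored signal is closer to the original than the degraded one
```

The `nsr` term is what keeps `G` bounded where `H` is nearly zero; the quaternion case in the paper needs extra care precisely because this convolution-to-product step does not carry over directly.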
Fuzzy membership function optimization for system identification using an extended Kalman filter
Simon, Dan
This work uses an extended Kalman filter to optimize the membership functions for system modeling, or system identification. A benefit of the proposed approach is that the system acts as a noise-reducing filter. We demonstrate that the extended Kalman filter can ...
Measurement of particle sulfate from micro-aethalometer filters
NASA Astrophysics Data System (ADS)
Wang, Qingqing; Yang, Fumo; Wei, Lianfang; Zheng, Guangjie; Fan, Zhongjie; Rajagopalan, Sanjay; Brook, Robert D.; Duan, Fengkui; He, Kebin; Sun, Yele; Brook, Jeffrey R.
2014-10-01
The micro-aethalometer (AE51) was designed for high time resolution black carbon (BC) measurements and the process collects particles on a filter inside the instrument. Here we examine the potential for saving these filters for subsequent sulfate (SO42-) measurement. For this purpose, a series of lab and field blanks were analyzed to characterize blank levels and variability, and then collocated 24-h aerosol sampling was conducted in Beijing with the AE51 and a dual-channel filterpack sampler that collects fine particles (PM2.5). AE51 filters and the filters from the filterpacks sampled for 24 h were extracted with ultrapure water and then analyzed by Ion Chromatography (IC) to determine integrated SO42- concentration. Blank corrections were essential, and the detection limit for 24 h AE51 sampling of SO42- was estimated to be 1.4 μg/m³. The SO42- measured from the AE51 based upon blank corrections using batch-average field blank SO42- values was found to be in reasonable agreement with the filterpack results (R2 > 0.87, slope = 1.02), indicating that it is possible to determine both BC and SO42- concentrations using the AE51 in Beijing. This result suggests that future comparison of the relative health impacts of BC and SO42- could be possible when the AE51 is used for personal exposure measurement.
Indoor occupant positioning system using active RFID deployment and particle filters
Indoor occupant positioning system using active RFID deployment and particle filters. Kevin Weekly … on the Sampling Importance Resampling (SIR) particle filtering algorithm. To use particle filtering methods … as researchers discover location-aware services such as on-demand lighting or ventilation control. However, due …
Merging particle filter for high-dimensional nonlinear problems S. Nakano, G. Ueno, and T. Higuchi
Nakano, Shin'ya
Merging particle filter for high-dimensional nonlinear problems. S. Nakano, G. Ueno, and T. Higuchi … the merging particle filter (MPF). In the MPF, a filtering procedure is performed by merging several particles of a prior ensemble, which is rather similar to the genetic algorithm (e.g., Goldberg, 1989). This merging …
Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results of data assimilation of ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve the streamflow forecast accuracy.
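The PF weight-update-and-resample cycle described above can be sketched for a scalar toy state. This is a generic bootstrap (SIR) particle filter under assumed random-walk dynamics and Gaussian observation noise, not the SACSMA configuration of the record:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, weights, obs, proc_std, obs_std):
    # Propagate each particle through the (random-walk) transition model
    particles = particles + proc_std * rng.standard_normal(particles.size)
    # Re-weight by the Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        u = (rng.random() + np.arange(particles.size)) / particles.size
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u),
                         particles.size - 1)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Track a slowly drifting scalar state from noisy observations
true_state = 0.0
particles = rng.standard_normal(500)
weights = np.full(500, 1.0 / 500)
for _ in range(50):
    true_state += 0.1
    obs = true_state + 0.5 * rng.standard_normal()
    particles, weights = sir_step(particles, weights, obs,
                                  proc_std=0.2, obs_std=0.5)
estimate = np.sum(weights * particles)
```

The weighted posterior mean tracks the drifting state; the resampling trigger (effective sample size below half the ensemble) is a common heuristic.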
Filtering of windborne particles by a natural windbreak
NASA Astrophysics Data System (ADS)
Bouvet, Thomas; Loubet, Benjamin; Wilson, John D.; Tuzet, Andree
2007-06-01
New measurements of the transport and deposition of artificial heavy particles (glass beads) to a thick ‘shelterbelt’ of maize (width/height ratio W/H ≈ 1.6) are used to test numerical simulations with a Lagrangian stochastic trajectory model driven by the flow field from a RANS (Reynolds-averaged, Navier-Stokes) wind and turbulence model. We illustrate the ambiguity inherent in applying to such a thick windbreak the pre-existing (Raupach et al. 2001; Atmos. Environ. 35, 3373-3383) ‘thin windbreak’ theory of particle filtering by vegetation, and show that the present description, while much more laborious, provides a reasonably satisfactory account of what was measured. A sizeable fraction of the particle flux entering the shelterbelt across its upstream face is lifted out of its volume by the mean updraft induced by the deceleration of the flow in the near-upstream and entry region, and these particles thereby escape deposition in the windbreak.
Selectively-informed particle swarm optimization
NASA Astrophysics Data System (ADS)
Gao, Yang; Du, Wenbo; Yan, Gang
2015-03-01
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors.
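The degree-dependent learning rule described above can be sketched on a toy heterogeneous network. The network (a ring plus one fully connected node), the hub-degree threshold, and the coefficient values are illustrative assumptions, not the benchmark setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return np.sum(x ** 2, axis=-1)

n, dim, hub_threshold = 20, 5, 6          # hub-degree threshold is an assumption
# Ring lattice plus one node wired to everyone -> heterogeneous degrees
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for i in range(1, n):
    adj[0].add(i)
    adj[i].add(0)

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_val = x.copy(), sphere(x)
init_best = pbest_val.min()
w, c = 0.7, 1.5
for _ in range(200):
    for i in range(n):
        nbrs = list(adj[i])
        if len(nbrs) >= hub_threshold:    # hub: fully informed by all neighbours
            pull = np.mean([c * rng.random(dim) * (pbest[j] - x[i])
                            for j in nbrs], axis=0)
        else:                             # non-hub: follow the best neighbour only
            jbest = min(nbrs, key=lambda j: pbest_val[j])
            pull = c * rng.random(dim) * (pbest[jbest] - x[i])
        v[i] = w * v[i] + pull
        x[i] = x[i] + v[i]
    vals = sphere(x)
    better = vals < pbest_val
    pbest[better] = x[better]
    pbest_val[better] = vals[better]
best = pbest_val.min()
```

On the unimodal sphere function the swarm contracts quickly; the paper's point is that the hub's fully-informed pull and the non-hubs' selective following balance convergence speed against diversity.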
Improving the LPJ-GUESS modelled carbon balance with a particle filter data assimilation technique
NASA Astrophysics Data System (ADS)
McRobert, Andrew; Scholze, Marko; Kemp, Sarah; Smith, Ben
2015-04-01
The recent increases in anthropogenic carbon dioxide (CO2) emissions have disrupted the equilibrium in the global carbon cycle pools, with the ocean and terrestrial pools increasing their respective storages to accommodate roughly half of the anthropogenic increase. Dynamic global vegetation models (DGVM) have been developed to quantify the modern carbon cycle changes. In this study, a particle filter data assimilation technique has been used to calibrate the process parameters in the DGVM LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator). LPJ-GUESS simulates individual plant functional types (PFTs) as a competitive balance within high resolution forest patches. Thirty process parameters have been optimized twice, using both sequential and iterative particle filter methods. The iterative method runs the model for the full time period of thirteen years and then evaluates the cost function from the mismatch of observations and model results before adjusting the parameters and repeating the full time period. The sequential method runs the model and particle filter for each year of the time series in order, adjusting the parameters between each year, then loops back to the beginning of the series to repeat. For each particle, the model output of NEP (Net Ecosystem Productivity) is compared to eddy flux measurements from ICOS flux towers to minimize the cost function. A high-resolution regional carbon balance has been simulated for central Sweden using a network of several ICOS flux towers.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2010-12-01
Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during machining process. In this paper, the application of particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO based approach is developed to optimize fixture layout through integrating ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. Particle library approach is used to decrease the total computation time. The computational experiment of 2D case shows that the numbers of function evaluations are decreased about 96%. Case study illustrates the effectiveness and efficiency of the PSO based optimization approach.
Hierarchical Linear/Constant Time SLAM Using Particle Filters for Dense Maps
Parr, Ronald
Abstract: We present an improvement to the DP-SLAM algorithm for simultaneous localization and mapping … a hierarchical extension of DP-SLAM that uses a two-level particle filter which models drift in the particle … from using a finite number of particles in a particle filter and permits the use of DP-SLAM in more …
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can deal with nonlinear and non-Gaussian problems and is therefore widely used. Various forms of the particle filter algorithm remain valid when target occlusion occurs or when tracking must recover from failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the dimension, leading to an increased amount of computation. In this paper the particle filter algorithm and mean shift are combined, addressing the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against a complex background. From the two perspectives of adaptive multi-feature fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multi-feature fusion. After analyzing the infrared characteristics of the target, the algorithm first extracts gray-level and edge features and guides both by the target's motion information, yielding motion-guided gray-level and motion-guided edge features. A new adaptive fusion mechanism then adaptively integrates these two features into the mean shift tracking framework. Finally, we design an automatic target model updating strategy to further improve tracking performance. Experimental results show that this algorithm compensates for the heavy computational load of the particle filter and effectively overcomes the tendency of mean shift to fall into local extrema instead of the global maximum. Because the gray-level features are fused with the target motion information, the approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
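The core mean shift iteration that the abstract builds on (and that gets trapped in local extrema on cluttered likelihood surfaces) is just a repeated move to the weighted centroid of a window. The Gaussian "target likelihood" map below is a stand-in for the fused feature weights of the paper:

```python
import numpy as np

def mean_shift(weight_map, start, radius=8, n_iter=30):
    # Climb the likelihood surface by repeatedly moving the window
    # to the weighted centroid of the pixels it covers
    cy, cx = start
    ys, xs = np.mgrid[0:weight_map.shape[0], 0:weight_map.shape[1]]
    for _ in range(n_iter):
        mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
        w = weight_map * mask
        total = w.sum()
        if total == 0:
            break
        ny, nx = (ys * w).sum() / total, (xs * w).sum() / total
        moved = abs(ny - cy) >= 0.5 or abs(nx - cx) >= 0.5
        cy, cx = ny, nx
        if not moved:
            break
    return cy, cx

# Gaussian blob standing in for the target likelihood, peak at (40, 60)
ys, xs = np.mgrid[0:100, 0:100]
blob = np.exp(-((ys - 40) ** 2 + (xs - 60) ** 2) / (2 * 5.0 ** 2))
cy, cx = mean_shift(blob, start=(30, 50))
```

On a unimodal surface the window converges to the peak; embedding such hill-climbing moves inside a particle filter, as the paper proposes, is what lets the combined tracker escape local modes.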
A multi-dimensional procedure for BNCT filter optimization
Lille, R.A.
1998-02-01
An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is very straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A locally optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model and an improvement of 50% in the dose ratio was obtained.
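The constrained gradient scheme described above (gradient ascent on the dose ratio, box bounds on each density, and a sum-of-fractions constraint per zone) can be sketched with a toy quadratic surrogate. The objective coefficients, bounds, and rescaling projection below are illustrative assumptions, not the paper's transport-based model:

```python
import numpy as np

def project(d, lo, hi):
    # Enforce the box bounds first, then rescale toward the lower
    # bounds to restore the linear constraint sum(d) <= 1
    d = np.clip(d, lo, hi)
    s = d.sum()
    if s > 1.0:
        d = lo + (d - lo) * (1.0 - lo.sum()) / (s - lo.sum())
    return d

# Toy quadratic surrogate for the tumour-to-healthy-tissue dose ratio
c_lin = np.array([3.0, 1.0, 2.0])

def dose_ratio(d):
    return c_lin @ d - 2.0 * (d @ d)

lo, hi = np.zeros(3), np.full(3, 0.8)
d0 = np.array([0.2, 0.2, 0.2])          # initial density fractions
d = d0.copy()
for _ in range(200):
    grad = c_lin - 4.0 * d              # analytic gradient of the surrogate
    d = project(d + 0.05 * grad, lo, hi)
```

The iterate settles on the constraint boundary (density fractions summing to 1) with an improved objective, mirroring the stopping conditions the abstract states in KKT-like terms.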
Optimal initial perturbations for El Nino ensemble prediction with ensemble Kalman filter
Kang, In-Sik
Optimal initial perturbations for El Nino ensemble prediction with ensemble Kalman filter … Yoo … of an ensemble Kalman filter (EnKF). Among the initial conditions generated by EnKF, ensemble members with fast … Keywords: Ensemble Kalman filter · Seasonal prediction · Optimal initial perturbation · Ensemble prediction
Achieving sub-nanometre particle mapping with energy-filtered TEM.
Lozano-Perez, S; de Castro Bernal, V; Nicholls, R J
2009-09-01
A combination of state-of-the-art instrumentation and optimized data processing has enabled for the first time the chemical mapping of sub-nanometre particles using energy-filtered transmission electron microscopy (EFTEM). Multivariate statistical analysis (MSA) generated reconstructed datasets where the signal from particles smaller than 1 nm in diameter was successfully isolated from the original noisy background. The technique has been applied to the characterization of oxide dispersion strengthened (ODS) reduced activation FeCr alloys, due to their relevance as structural materials for future fusion reactors. Results revealed that most nanometer-sized particles had a core-shell structure, with an Yttrium-Chromium-Oxygen-rich core and a nano-scaled Chromium-Oxygen-rich shell. This segregation to the nanoparticles caused a decrease of the Chromium dissolved in the matrix, compromising the corrosion resistance of the alloy. PMID:19505762
Loss of Fine Particle Ammonium from Denuded Nylon Filters
Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.
2006-08-01
Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July–August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(-III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.
Numerical analysis of particle distribution on multi-pipe ceramic candle filters
NASA Astrophysics Data System (ADS)
Li, H. X.; Gao, B. G.; Tie, Z. X.; Sun, Z. J.; Wang, F. H.
2010-03-01
The particle distribution on the ceramic filter surface has a great effect on filtration performance. The numerical simulation method is used to analyze the particle distribution near the filter surface under different operation conditions. The gas/solid two-phase flow field in the ceramic filter vessel was simulated using the Eulerian two-fluid model provided by the FLUENT code. A user-defined function was loaded with the FLUENT solver to define the interaction between the particles and the gas near the porous ceramic candle filter. The distribution of the filter cake along the filter length and around the filter circumference was analyzed. The simulation results agree well with experimental data. The simulation model can be used to predict the particle distribution and provide theoretical guidance for the engineering application of porous ceramic filters.
Nonlinear EEG decoding based on a particle filter model.
Zhang, Jinhua; Wei, Jiongjian; Wang, Baozeng; Hong, Jun; Wang, Jing
2014-01-01
While the world is stepping into the aging society, rehabilitation robots play a more and more important role in terms of both rehabilitation treatment and nursing of the patients with neurological diseases. Benefiting from the abundant contents of movement information, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model was used as the decoding model of EEG signals in some studies, it has been considered that it cannot reflect the nonlinear components of EEG signals. In order to overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420
Symmetric Phase-Only Filtering in Particle-Image Velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2008-01-01
Symmetrical phase-only filtering (SPOF) can be exploited to obtain substantial improvements in the results of data processing in particle-image velocimetry (PIV). In comparison with traditional PIV data processing, SPOF PIV data processing yields narrower and larger amplitude correlation peaks, thereby providing more-accurate velocity estimates. The higher signal-to-noise ratios associated with the higher amplitude correlation peaks afford greater robustness and reliability of processing. SPOF also affords superior performance in the presence of surface flare light and/or background light. SPOF algorithms can readily be incorporated into pre-existing algorithms used to process digitized image data in PIV, without significantly increasing processing times. A summary of PIV and traditional PIV data processing is prerequisite to a meaningful description of SPOF PIV processing. In PIV, a pulsed laser is used to illuminate a substantially planar region of a flowing fluid in which particles are entrained. An electronic camera records digital images of the particles at two instants of time. The components of velocity of the fluid in the illuminated plane can be obtained by determining the displacements of particles between the two illumination pulses. The objective in PIV data processing is to compute the particle displacements from the digital image data. In traditional PIV data processing, to which the present innovation applies, the two images are divided into a grid of subregions and the displacements determined from cross-correlations between the corresponding sub-regions in the first and second images. The cross-correlation process begins with the calculation of the Fourier transforms (or fast Fourier transforms) of the subregion portions of the images. The Fourier transforms from the corresponding subregions are multiplied, and this product is inverse Fourier transformed, yielding the cross-correlation intensity distribution. 
The average displacement of the particles across a subregion results in a displacement of the correlation peak from the center of the correlation plane. The velocity is then computed from the displacement of the correlation peak and the time between the recording of the two images. The process as described thus far is performed for all the subregions. The resulting set of velocities in grid cells amounts to a velocity vector map of the flow field recorded on the image plane. In traditional PIV processing, surface flare light and bright background light give rise to a large, broad correlation peak, at the center of the correlation plane, that can overwhelm the true particle- displacement correlation peak. This has made it necessary to resort to tedious image-masking and background-subtraction procedures to recover the relatively small amplitude particle-displacement correlation peak. SPOF is a variant of phase-only filtering (POF), which, in turn, is a variant of matched spatial filtering (MSF). In MSF, one projects a first image (denoted the input image) onto a second image (denoted the filter) as part of a computation to determine how much and what part of the filter is present in the input image. MSF is equivalent to cross-correlation. In POF, the frequency-domain content of the MSF filter is modified to produce a unitamplitude (phase-only) object. POF is implemented by normalizing the Fourier transform of the filter by its magnitude. The advantage of POFs is that they yield correlation peaks that are sharper and have higher signal-to-noise ratios than those obtained through traditional MSF. In the SPOF, these benefits of POF can be extended to PIV data processing. The SPOF yields even better performance than the POF approach, which is uniquely applicable to PIV type image data. In SPOF as now applied to PIV data processing, a subregion of the first image is treated as the input image and the corresponding subregion of the second image is treated as the filter. 
The Fourier transforms from both the first- and second-image subregions are normalized by the square roots of their respective magnitudes.
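That square-root normalization step can be sketched directly. The synthetic particle image, its displacement, and the wrap-around handling below are illustrative assumptions used to show the SPOF correlation recovering a known shift:

```python
import numpy as np

rng = np.random.default_rng(3)

def spof_displacement(sub_a, sub_b):
    # Normalise each spectrum by the square root of its own magnitude,
    # then cross-correlate: the symmetric phase-only filter
    A, B = np.fft.fft2(sub_a), np.fft.fft2(sub_b)
    eps = 1e-12
    cross = np.conj(A / np.sqrt(np.abs(A) + eps)) * (B / np.sqrt(np.abs(B) + eps))
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed displacements
    return [int(p) if p <= s // 2 else int(p) - s
            for p, s in zip(peak, corr.shape)]

# Synthetic particle image and a copy displaced by (5, -3) pixels
img_a = np.zeros((64, 64))
for yy, xx in rng.integers(0, 64, size=(40, 2)):
    img_a[yy, xx] = 1.0
img_b = np.roll(img_a, (5, -3), axis=(0, 1))
shift = spof_displacement(img_a, img_b)
```

Because the magnitude normalization whitens the spectra, the correlation peak is much narrower than in plain cross-correlation, which is the accuracy and robustness benefit the text describes.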
Optimal filtering of solar images using soft morphological processing techniques
NASA Astrophysics Data System (ADS)
Marshall, S.; Fletcher, L.; Hough, K.
2006-10-01
Context: CCD images obtained by space-based astronomy and solar physics are frequently spoiled by galactic and solar cosmic rays, and by particles in the Earth's radiation belt, which produce an overlaid, often saturated, speckle. Aims: We describe the development and application of a new image-processing technique for the removal of this noise source, and apply it to SOHO/LASCO coronagraph images. Methods: We employ soft morphological filters, a branch of non-linear image processing originating from the field of mathematical morphology, which are particularly effective for noise removal. Results: The soft morphological filters result in a significant improvement in image quality, and perform significantly better than other currently existing methods based on frame comparison, thresholding, or simple morphologies. Conclusions: This is a promising and adaptable technique that should be extendable to other space-based solar and astronomy datasets.
ECHO CANCELLATION BY GLOBAL OPTIMIZATION OF KAUTZ FILTERS USING AN INFORMATION THEORETIC CRITERION
Slatton, Clint
ECHO CANCELLATION BY GLOBAL OPTIMIZATION OF KAUTZ FILTERS USING AN INFORMATION THEORETIC CRITERION … to ensure global optimization. 1. INTRODUCTION. Echo cancellation is an important practical problem whose … parameters of an adaptive IIR filter to achieve global optimization, yet still use gradient descent. Recently …
NASA Astrophysics Data System (ADS)
Zuccaro, G.; Lapenta, G.; Ferrero, F.; Maizza, G.
2011-02-01
In diesel particulate filter technology, a key aspect is represented by the properties of the particulate matter that is collected inside the filter structure. The work presented is focused on the development of an innovative mathematical tool based on the particle-in-cell (PIC) method for the simulation of the soot distribution inside a single channel of a diesel particulate filter. The basic fluid dynamic equations are solved for the gas phase inside the channel using a novel technique based on the solution of the same set of equations everywhere in the system, including the porous medium. This approach is presented as an alternative to the more conventional methods of matching conditions across the boundary of the porous region, where a Darcy-like flow develops. The motion of the soot solid particles is instead described through a particle-by-particle approach based on Newton's equations of motion. The coupling between the dynamics of the gas and that of the soot particles, i.e. between these two sub-models, is performed through the implementation of the particle-in-cell technique. This model allows the detailed simulation of the deposition and compaction of the soot inside the filter channels and its characterization in terms of density, permeability and thickness. The model then represents a unique tool for the optimization of the design of diesel particulate filters. The details of the technique implementation and some paradigmatic examples will be shown.
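The gather/scatter coupling at the heart of any particle-in-cell scheme is the deposition of particle quantities onto grid nodes. A minimal 1D cloud-in-cell sketch is below; the channel length, cell count, and periodic wrap are illustrative assumptions, not the paper's channel model:

```python
import numpy as np

def deposit(positions, masses, n_cells, dx):
    # Cloud-in-cell (linear) weighting: each computational particle shares
    # its mass between the two nearest grid nodes (periodic wrap assumed)
    grid = np.zeros(n_cells)
    cell = np.floor(positions / dx).astype(int)
    frac = positions / dx - cell
    np.add.at(grid, cell % n_cells, masses * (1.0 - frac))
    np.add.at(grid, (cell + 1) % n_cells, masses * frac)
    return grid

rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 1.0, 1000)      # soot parcel positions along the channel
m = np.full(1000, 1e-6)                # equal computational-parcel masses
soot = deposit(pos, m, n_cells=50, dx=1.0 / 50)
```

Linear weighting conserves the deposited mass exactly, which is what makes PIC attractive for tracking cake density and thickness as soot accumulates.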
Numerical simulation of DPF filter for selected regimes with deposited soot particles
NASA Astrophysics Data System (ADS)
Lávi?ka, David; Kova?ík, Petr
2012-04-01
For the purpose of accumulating particulate matter from Diesel engine exhaust gas, particle filters (referred to as DPF or FAP filters in the automotive industry) are used. However, the cost of these filters is quite high, and as the emission limits become stricter, the requirements for PM collection rise accordingly. Particulate matter is very dangerous to human health and is not visible to the human eye; it can cause various diseases of the respiratory tract and can even cause lung cancer. Numerical simulations were performed to analyze particle filter behavior under various operating modes. The simulations focused especially on selected critical states of the particle filter, when the engine is switched to an emergency regime. The aim was to prevent and avoid critical situations through an understanding of the filter behavior. The numerical simulations were based on experimental analysis of used diesel particle filters.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for the design of a 2-channel linear-phase quadrature mirror filter (QMF) bank, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (φp), stopband error (φs), transition band error (φt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
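The frequency-domain objective described above (L2 errors in the passband, stopband and at the quadrature frequency) can be sketched as a single scalar cost ready to hand to any of the listed optimizers. The band edges (0.4π, 0.6π), the grid density and the target value 1/√2 at π/2 are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def qmf_design_error(h, n_grid=512):
    """Scalar design cost for the lowpass prototype of a 2-channel QMF
    bank: L2 error from unity in the passband, L2 energy in the
    stopband, and squared deviation of |H| from 1/sqrt(2) at the
    quadrature frequency pi/2. Band edges and grid are illustrative."""
    w = np.linspace(0.0, np.pi, n_grid)
    # Magnitude response of the FIR prototype on the frequency grid.
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)
    passband, stopband = w <= 0.4 * np.pi, w >= 0.6 * np.pi
    e_p = np.sum((H[passband] - 1.0) ** 2)        # passband error
    e_s = np.sum(H[stopband] ** 2)                # stopband error
    e_t = (np.interp(np.pi / 2, w, H) - 1.0 / np.sqrt(2.0)) ** 2
    return e_p + e_s + e_t

# The two-tap Haar prototype hits 1/sqrt(2) exactly at pi/2, so its
# cost is dominated by the gentle passband/stopband roll-off.
err_haar = qmf_design_error(np.array([0.5, 0.5]))
```

Any of CS, MCS, WDO, PSO or ABC can then be run directly on `qmf_design_error` over the coefficient vector `h`.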
Sequential approach to multisensor resource management using particle filters
NASA Astrophysics Data System (ADS)
Penny, Dawn E.; Williams, Mark
2000-07-01
Elements from data fusion, optimisation and particle filtering are brought together to form the Multi-Sensor Fusion Management (MSFM) algorithm. The algorithm provides a framework for combining the information from multiple sensors and producing good solutions to the problem of how best to deploy/use these and/or other sensors to optimise some criteria in the future. A problem from Anti-Submarine Warfare (ASW) is taken as an example of the potential use of the algorithm. The algorithm is shown to make efficient use of a limited supply of passive sonobuoys in order to locate a submarine to the required accuracy. The results show that in the simulation the traditional strategies for sonobuoy deployment required approximately four times as many sonobuoys as the MSFM algorithm to achieve the required localisation.
Multimodal MRI Neuroimaging with Motion Compensation Based on Particle Filtering
Chen, Yu-Hui; Kim, Boklye; Meyer, Charles; Hero, Alfred
2015-01-01
Head movement during scanning impedes activation detection in fMRI studies. Head motion in fMRI acquired using slice-based Echo Planar Imaging (EPI) can be estimated and compensated by aligning the images onto a reference volume through image registration. However, registering EPI images volume to volume fails to consider head motion between slices, which may lead to severely biased head motion estimates. Slice-to-volume registration can be used to estimate motion parameters for each slice by more accurately representing the image acquisition sequence. However, accurate slice-to-volume mapping depends on the information content of the slices: middle slices are information-rich, while edge slices are information-poor and more prone to distortion. In this work, we propose a Gaussian particle filter based head motion tracking algorithm to reduce image misregistration errors. The algorithm uses a dynamic state-space model of head motion with an observation equation that models continuous slice acquisitio...
Particle filter and EnKF as data assimilation methods for the Kuramoto-Sivashinsky Equation
... adjoint methods ... and only Chorin and Krause [26] tested it using a sequential Bayesian filter approach. In this work we compare the usual ensemble Kalman filter (EnKF) approach versus versions ...
Estimation of the error for small-sample optimal binary filter design using prior knowledge
Sabbagh, David L
1999-01-01
Optimal binary filters estimate an unobserved ideal quantity from observed quantities. Optimality is with respect to some error criterion, which is usually mean absolute error (MAE) (or, equivalently, mean square error) for binary values. Both...
MCMC-Based Particle Filtering for Tracking a Variable Number of Interacting Targets
Khan, Zia
... with complicated target interactions. Index Terms: particle filters, multi-target tracking, Markov random fields.
LOCATION-CONSTRAINED PARTICLE FILTER FOR RSSI-BASED INDOOR HUMAN POSITIONING AND TRACKING SYSTEM
Wu, An-Yeu "Andy"
... a Location-Constrained Particle Filter (LC-PF) for a Radio Signal Strength Indication (RSSI) based indoor localization system ... the NLOS propagation. Currently, RSSI-based location systems are ...
Tracking Football Player Movement From a Single Moving Camera Using Particle Filters
Demiris, Yiannis
Keywords: Soccer, Tracking, Particle Filter. This paper deals with the problem of tracking football players in a football match using data from a single moving camera. Tracking footballers from a single video source ...
Robust Tracking-by-Detection using a Detector Confidence Particle Filter Michael D. Breitenstein1
... approach for multi-person tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors ...
Actin Filament Tracking Based on Particle Filters and Stretching Open Active Contour Models
Huang, Xiaolei
... a novel algorithm for actin filament tracking and elongation measurement. Particle Filters (PF) ... dimensional state space while naturally integrating filament body constraints into tip estimation. Our algorithm ...
Effect of embedded unbiasedness on discrete-time optimal FIR filtering estimates
NASA Astrophysics Data System (ADS)
Zhao, Shunyi; Shmaliy, Yuriy S.; Liu, Fei; Ibarra-Manzano, Oscar; Khan, Sanowar H.
2015-12-01
Unbiased estimation is an efficient alternative to optimal estimation when the noise statistics are not fully known and/or the model undergoes temporary uncertainties. In this paper, we investigate the effect of embedded unbiasedness (EU) on optimal finite impulse response (OFIR) filtering estimates of linear discrete time-invariant state-space models. A new OFIR-EU filter is derived by minimizing the mean square error (MSE) subject to the unbiasedness constraint. We show that the OFIR-EU filter is equivalent to the minimum-variance unbiased FIR (UFIR) filter. Unlike the OFIR filter, the OFIR-EU filter does not require the initial conditions. In terms of accuracy, the OFIR-EU filter occupies an intermediate place between the UFIR and OFIR filters. Contrary to the UFIR filter, whose MSE is minimized at the optimal horizon of N_opt points, the MSEs of the OFIR-EU and OFIR filters diminish with N; these filters are thus full-horizon. Based upon several examples, we show that the OFIR-EU filter has higher immunity against errors in the noise statistics and better robustness against temporary model uncertainties than the OFIR and Kalman filters.
Pilz, T.
1995-12-31
For power generation with combined cycles, or for the production of so-called advanced materials by vapor-phase synthesis, particle separation at high temperatures is of crucial importance. In these systems, rigid ceramic barrier filters are either of thermodynamic benefit to the process or essential for producing materials with certain properties. A hot-gas filter test rig has been installed to investigate the influence of different parameters, e.g. temperature, dust properties, filter media, and filtration and regeneration conditions, on particle separation at high temperatures. These tests were conducted both with commonly used filter candles and with filter discs made of the same material. The filter disc is mounted at one side of the test rig, so that both filters face the same raw-gas conditions. The filter disc is traversed by flow in a cross-flow arrangement. This is based on the conviction that, for comparing the filtration characteristics of candles with filter discs or other model filters, the structure of the dust cakes has to be equal. This way of investigating the influence of the above-mentioned parameters on dust separation at high temperatures follows the new standard VDI 3926, which prescribes test procedures for the characterization of filter media at ambient conditions. The paper mainly focuses on the influence of particle properties (e.g. stickiness) upon the filtration and regeneration behavior of fly ashes with rigid ceramic filters.
PARTICLE REMOVAL AND HEAD LOSS DEVELOPMENT IN BIOLOGICAL FILTERS
The physical performance of granular media filters was studied under pre-chlorinated, backwash-chlorinated, and nonchlorinated conditions. Overall, biological filtration produced a high-quality water. Although effluent turbidities showed little difference between the perform...
Optimization of the performances of correlation filters by pre-processing the input plane
NASA Astrophysics Data System (ADS)
Bouzidi, F.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Fakhfakh, A.
2016-01-01
We report findings on the optimization of the performances of correlation filters. First, we propound and validate an optimization of ROC curves adapted to correlation technique. Then, analysis suggests that a pre-processing of the input plane leads to a compromise between the robustness of the adapted filter and the discrimination of the inverse filter for face recognition applications. Rewardingly, our technical results demonstrate that this method is remarkably efficient to increase the performances of a VanderLugt correlator.
Full-field particle velocimetry with a photorefractive optical novelty filter
Woerdemann, Mike; Holtmann, Frank; Denz, Cornelia
2008-07-14
We utilize the finite time constant of a photorefractive optical novelty filter microscope to access full-field velocity information of fluid flows on microscopic scales. In contrast to conventional methods such as particle image velocimetry and particle tracking velocimetry, not only image acquisition of the tracer particle field but also evaluation of tracer particle velocities is done all-optically by the novelty filter. We investigate the velocity dependent parameters of two-beam coupling based optical novelty filters and demonstrate calibration and application of a photorefractive velocimetry system. Theoretical and practical limits to the range of accessible velocities are discussed.
Multi-path light extinction approach for high efficiency filtered oil particle measurement
NASA Astrophysics Data System (ADS)
Pengfei, Yin; Jun, Chen; Huinan, Yang; Lili, Liu; Xiaoshu, Cai
2014-04-01
This work presents a multi-path light extinction approach to determine oil mist filter efficiency based on measuring the concentration and size distribution of oil particles. The light extinction spectrum (LES) technique was used to retrieve the oil particle size distribution and concentration. The multi-path measuring cell was designed to measure low-concentration, fine particles after filtering; its path length was calibrated as 200 cm. With oil mist filtering, the oil particle size was obtained as D32 = 0.9 μm and the concentration as Cv = 1.6×10⁻⁸.
Particle filtering methods for georeferencing panoramic image sequence in complex urban scenes
NASA Astrophysics Data System (ADS)
Ji, Shunping; Shi, Yun; Shan, Jie; Shao, Xiaowei; Shi, Zhongchao; Yuan, Xiuxiao; Yang, Peng; Wu, Wenbin; Tang, Huajun; Shibasaki, Ryosuke
2015-07-01
Georeferencing image sequences is critical for mobile mapping systems. Traditional methods such as bundle adjustment need adequate and well-distributed ground control points (GCPs) when accurate GPS data are not available in complex urban scenes. For large-area applications, automatic extraction of GCPs, by matching vehicle-borne image sequences with georeferenced ortho-images, is a better choice than intensive GCP collection by field surveying. However, such image-matching-generated GCPs are highly noisy, especially in complex urban street environments, due to shadows, occlusions and moving objects in the ortho-images. This study presents a probabilistic solution that integrates matching and localization under one framework. First, a probabilistic global localization model is formulated based on Bayes' rule and a Markov chain. Unlike many conventional methods, our model can accommodate non-Gaussian observations. In the next step, a particle filtering method is applied to determine this model under highly noisy GCPs. Owing to the multiple-hypothesis tracking represented by diverse particles, the method can balance the strength of geometric and radiometric constraints, i.e., drifted motion models and noisy GCPs, and guarantee an approximately optimal trajectory. Tests were carried out with thousands of mobile panoramic images and aerial ortho-images. Compared with conventional extended Kalman filtering and a global registration method, the proposed approach succeeds even with more than 80% gross errors in the GCPs and reaches an accuracy equivalent to traditional bundle adjustment with dense and precise control.
Estimation of Tumor Size Evolution Using Particle Filters.
Costa, Jose M J; Orlande, Helcio R B; Velho, Haroldo F Campos; de Pinho, Suani T R; Dulikravich, George S; Cotta, Renato M; da Cunha Neto, Silvio H
2015-07-01
Cancer is characterized by the uncontrolled growth of cells with the ability of invading local organs and/or tissues and of spreading to other sites. Several kinds of mathematical models have been proposed in the literature, involving different levels of refinement, for the evolution of tumors and their interactions with chemotherapy drugs. In this article, we present the solution of a state estimation problem for tumor size evolution. A system of nonlinear ordinary differential equations is used as the state evolution model, which involves as state variables the numbers of tumor, normal and angiogenic cells, as well as the masses of the chemotherapy and anti-angiogenic drugs in the body. Measurements of the numbers of tumor and normal cells are considered available for the inverse analysis. Parameters appearing in the formulation of the state evolution model are treated as Gaussian random variables and their uncertainties are taken into account in the estimation of the state variables, by using an algorithm based on the auxiliary sampling importance resampling particle filter. Test cases are examined in the article dealing with a chemotherapy protocol for pancreatic cancer. PMID:25973723
NASA Astrophysics Data System (ADS)
Chen, Jing; Liu, Tundong; Jiang, Hao
2016-01-01
A Pareto-based multi-objective optimization approach is proposed to design multichannel FBG filters. Instead of defining a single optimal objective, the proposed method establishes the multi-objective model by taking two design objectives into account, which are minimizing the maximum index modulation and minimizing the mean dispersion error. To address this optimization problem, we develop a two-stage evolutionary computation approach integrating an elitist non-dominated sorting genetic algorithm (NSGA-II) and technique for order preference by similarity to ideal solution (TOPSIS). NSGA-II is utilized to search for the candidate solutions in terms of both objectives. The obtained results are provided as Pareto front. Subsequently, the best compromise solution is determined by the TOPSIS method from the Pareto front according to the decision maker's preference. The design results show that the proposed approach yields a remarkable reduction of the maximum index modulation and the performance of dispersion spectra of the designed filter can be optimized simultaneously.
ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435
Stillo, Andrew; Ricketts, Craig I.
2013-07-01
High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements, including specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance, and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. The ability of a filter to maintain its integrity when subjected to severe operating conditions, such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested.
The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK[1]. (authors)
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul
2015-03-01
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
Bak, Claus Leth
... for the LCL filters and LCL filters with multituned LC traps. In short, the optimization problem reduces to the proper ... for verification. Index Terms: harmonic passive filters, LCL filter, resonance damping, trap filter, voltage ... the LCL filter, which became well accepted and widely used as an interface between renewable energy ...
A Blind Particle Filtering Detector (IEEE Transactions on Signal Processing, Vol. 52, No. 7, July 2004, p. 1891). A new particle filtering detector (PFD) is proposed for blind signal detection ... Index Terms: least mean square, particle filtering detector, recursive least square.
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Dowd, Michael; Fennel, Katja
2013-05-01
We assimilate satellite observations of surface chlorophyll into a three-dimensional biological ocean model in order to improve its state estimates using a particle filter referred to as sequential importance resampling (SIR). Particle Filters represent an alternative to other, more commonly used ensemble-based state estimation techniques like the ensemble Kalman filter (EnKF). Unlike the EnKF, Particle Filters do not require normality assumptions about the model error structure and are thus suitable for highly nonlinear applications. However, their application in oceanographic contexts is typically hampered by the high dimensionality of the model's state space. We apply SIR to a high-dimensional model with a small ensemble size (20) and modify the standard SIR procedure to avoid complications posed by the high dimensionality of the model state. Two extensions to the SIR include a simple smoother to deal with outliers in the observations, and state-augmentation which provides the SIR with parameter memory. Our goal is to test the feasibility of biological state estimation with SIR for realistic models. For this purpose we compare the SIR results to a model simulation with optimal parameters with respect to the same set of observations. By running replicates of our main experiments, we assess the robustness of our SIR implementation. We show that SIR is suitable for satellite data assimilation into biological models and that both extensions, the smoother and state-augmentation, are required for robust results and improved fit to the observations.
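The SIR cycle the study builds on (predict, weight, resample) can be sketched for a scalar state. This is a generic illustration with an assumed random-walk transition and a Gaussian observation likelihood, not the ocean-model implementation; the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, observation, obs_std, process_std):
    """One sequential importance resampling (SIR) step: propagate the
    particles through a stochastic model, weight them by the
    observation likelihood, and resample proportionally to weights."""
    # Predict: a simple random-walk transition stands in for the model.
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # Weight: log-domain Gaussian likelihood, shifted for numerical safety.
    logw = -0.5 * ((observation - particles) / obs_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample: multinomial resampling keeps the ensemble size fixed.
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# Track a constant true state (3.0) from noisy observations with a
# small ensemble of 20 particles, as in the paper's small-ensemble setting.
particles = rng.normal(0.0, 5.0, size=20)
for _ in range(50):
    obs = 3.0 + rng.normal(0.0, 0.5)
    particles = sir_step(particles, obs, obs_std=0.5, process_std=0.2)
```

The smoother and state-augmentation extensions described above would wrap additional logic around this basic step.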
Goodarz Ahmadi
2002-07-01
In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel, and makes use of Lagrangian trajectory analysis for particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered: a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake leads to lower mechanical strength, thus allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold-flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was studied experimentally. A computer model for the experimental vessel was also developed, and the gas flow and particle transport patterns are evaluated.
MCMC-based particle filtering for tracking a variable number of interacting targets.
Khan, Zia; Balch, Tucker; Dellaert, Frank
2005-11-01
We describe a particle filter that effectively deals with interacting targets--targets that are influenced by the proximity and/or behavior of other targets. The particle filter includes a Markov random field (MRF) motion prior that helps maintain the identity of targets throughout an interaction, significantly reducing tracker failures. We show that this MRF prior can be easily implemented by including an additional interaction factor in the importance weights of the particle filter. However, the computational requirements of the resulting multitarget filter render it unusable for large numbers of targets. Consequently, we replace the traditional importance sampling step in the particle filter with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter. We also show how to extend this MCMC-based filter to address a variable number of interacting targets. Finally, we present both qualitative and quantitative experimental results, demonstrating that the resulting particle filters deal efficiently and effectively with complicated target interactions. PMID:16285378
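The MRF interaction prior can be illustrated by the extra multiplicative factor it contributes to each particle's importance weight: pairs of target states that come too close are penalized, which discourages two trackers from collapsing onto a single target. The linear overlap measure and the constants `alpha` and `radius` below are illustrative choices, not the paper's exact potential.

```python
import numpy as np

def interaction_factor(states, alpha=5.0, radius=1.0):
    """Multiplicative term contributed by an MRF pairwise prior to a
    particle's importance weight: every pair of target states closer
    than `radius` is penalized, so near-coincident target hypotheses
    are down-weighted. alpha/radius are illustrative constants."""
    factor = 1.0
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = np.linalg.norm(states[i] - states[j])
            overlap = max(0.0, 1.0 - d / radius)  # 0 once targets are apart
            factor *= np.exp(-alpha * overlap)
    return factor

# Two well-separated targets are not penalized; two near-coincident
# target hypotheses are strongly down-weighted.
far = np.array([[0.0, 0.0], [5.0, 0.0]])
close = np.array([[0.0, 0.0], [0.1, 0.0]])
```

In the full filter, this factor multiplies the observation likelihood in each particle's weight; the MCMC variant replaces the importance-sampling step entirely.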
Optimizing filtered backprojection reconstruction for a breast tomosynthesis prototype device
NASA Astrophysics Data System (ADS)
Mertelmeier, Thomas; Orman, Jasmina; Haerer, Wolfgang; Dudam, Mithun K.
2006-03-01
Digital breast tomosynthesis is a new technique intended to overcome the limitations of conventional projection mammography by reconstructing slices through the breast from projection views acquired from different angles with respect to the breast. We formulate a general theory of filtered backprojection reconstruction for linear tomosynthesis. The filtering step consists of an MTF inversion filter, a spectral filter, and a slice thickness filter. In this paper the method is applied first to simulated data to understand the basic effects of the various filtering steps. We then demonstrate the impact of the filter functions with simulated projections and with clinical data acquired with a research breast tomosynthesis system. With this reconstruction method the image quality can be controlled with regard to noise and spatial resolution. Over a wide range of spatial frequencies the slice thickness can be kept constant, and artifacts caused by the incompleteness of the data can be suppressed.
Adapting the Sample Size in Particle Filters Through KLD-Sampling
Washington at SeattleUniversity of
Dieter Fox, Department of Computer ... particle filters by adapting the size of sample sets during the estimation process. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation ...
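The KLD-sampling bound mentioned above can be sketched numerically: the required number of particles grows roughly linearly with the number of histogram bins currently occupied by the belief, using the Wilson-Hilferty approximation of the chi-square quantile. The error bound `epsilon` and the quantile `z` are the designer's choices; the defaults below are illustrative.

```python
import math

def kld_sample_size(k, epsilon=0.05, z=1.96):
    """Number of particles needed so that, with confidence tied to the
    quantile z, the KL divergence between the sample-based belief and
    the true posterior stays below epsilon. Uses the Wilson-Hilferty
    approximation of the chi-square quantile; k is the number of
    histogram bins currently occupied by particles."""
    if k < 2:
        return 1
    d = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - d + math.sqrt(d) * z) ** 3
    return int(math.ceil(n))

# A tightly focused belief (few occupied bins) needs far fewer
# particles than a widespread one, which is the source of the speedup.
```

During each filter update, the sample set is grown until it reaches the bound implied by the bins occupied so far.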
Schulz, Dirk
Tracking Multiple Moving Targets with a Mobile Robot using Particle Filters and Statistical Data Association ... tracking multiple moving objects. JPDAFs compute a Bayesian estimate of the correspondence between ... approaches to tracking multiple targets apply Kalman filters to estimate the states of the individual objects.
A GIRSANOV MONTE CARLO APPROACH TO PARTICLE FILTERING FOR MULTI-TARGET TRACKING
... and signal processing, air traffic control and GPS navigation [10]. The tracking problem consists ... filters for multi-target tracking. The suggested approach is based on Girsanov's change of measure ... to improve significantly the performance of a particle filter.
Anderson, Andrew D. (Andrew David)
2006-01-01
This thesis considers possible solutions to sample impoverishment, a well-known failure mode of the Rao-Blackwellized particle filter (RBPF) in simultaneous localization and mapping (SLAM) situations that arises when ...
NASA Astrophysics Data System (ADS)
Shmaliy, Yuriy S.; Ibarra-Manzano, Oscar
2012-12-01
We address p-shift finite impulse response optimal (OFIR) and unbiased (UFIR) algorithms for predictive filtering (p > 0), filtering (p = 0), and smoothing filtering (p < 0) at a discrete point n over N neighboring points. The algorithms were designed for linear time-invariant state-space signal models with white Gaussian noise. The OFIR filter self-determines the initial mean square state function by solving the discrete algebraic Riccati equation. The UFIR filter, represented in both batch and iterative Kalman-like forms, does not require the noise covariances or the initial errors. An example of application is given for smoothing and predictive filtering of a two-state polynomial model. Based upon this example, we show that exact optimality is redundant when N ≫ 1, and that a good suboptimal estimate can still be provided by a UFIR filter at a much lower cost.
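For a two-state polynomial model of the kind mentioned above, the batch UFIR estimate reduces to an ordinary least-squares line fit over the N-point horizon, needing neither noise covariances nor initial conditions. The sketch below shows this batch form only, not the iterative Kalman-like recursion, and the regressor construction is an illustrative reading of the model.

```python
import numpy as np

def ufir_batch_estimate(y, T=1.0):
    """Batch unbiased FIR estimate for a two-state polynomial
    (constant-velocity) model over an N-point horizon: a least-squares
    line fit to the N most recent measurements, evaluated at the
    latest sample. No noise statistics or initial conditions needed."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    k = np.arange(N)
    # Regressor rows [1, (k - (N-1))T] reference the state at time n.
    H = np.column_stack([np.ones(N), (k - (N - 1)) * T])
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return x_hat  # [position, velocity] at the most recent sample

# A noise-free ramp y_k = 2 + 0.5 k is recovered exactly at k = 7.
pos, vel = ufir_batch_estimate([2.0 + 0.5 * k for k in range(8)])
```

Growing the horizon N averages out more noise, which is why exact optimality becomes redundant for large N.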
Franke, Felix; Quian Quiroga, Rodrigo; Hierlemann, Andreas; Obermayer, Klaus
2015-06-01
Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy. PMID:25652689
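The linear-filtering form of template matching can be sketched as a cross-correlation of the recording with the (optionally whitened) template, whose peaks mark candidate spike times; a threshold on this statistic gives the detector. The whitening step and all constants below are an illustrative sketch of the idea, not the paper's exact discriminant or threshold.

```python
import numpy as np

def matched_filter_output(signal, template, noise_cov_inv=None):
    """Linear-filter template matching: cross-correlate the recording
    with the template; in the colored-Gaussian-noise case the template
    is first whitened with the inverse noise covariance. Peaks in the
    output mark candidate spike times."""
    t = np.asarray(template, dtype=float)
    if noise_cov_inv is not None:
        t = noise_cov_inv @ t  # whiten the template for colored noise
    # Convolution with the time-reversed template == cross-correlation.
    return np.convolve(np.asarray(signal, float), t[::-1], mode="valid")

# A spike waveform buried in noise produces a clear peak at its onset.
rng = np.random.default_rng(1)
template = np.array([0.0, 1.0, 3.0, 1.0, 0.0])
signal = rng.normal(0.0, 0.3, size=100)
signal[40:45] += template
out = matched_filter_output(signal, template)
```

A bank of such filters, one per template, yields the per-neuron outputs used for online classification.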
Backus, Sterling J. (Erie, CO); Kapteyn, Henry C. (Boulder, CO)
2007-07-10
A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly on each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to change the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.
Statistical Design and Optimization for Adaptive Post-silicon Tuning of MEMS Filters
Li, Xin
Statistical Design and Optimization for Adaptive Post-silicon Tuning of MEMS Filters Fa Wang, Gokce of microelectro-mechanical systems (MEMS) for RF (radio frequency) applications. In this paper we describe a novel technique of adaptive post-silicon tuning to reliably design MEMS filters that are robust to process
Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization
Cho, Sung-Bae
Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization. Keywords: fingerprint image generation; evolutionary algorithm; image filters; input pressure. Constructing a fingerprint database is important to evaluate the performance
Cosmological parameter estimation using particle swarm optimization
NASA Astrophysics Data System (ADS)
Prasad, Jayanti; Souradeep, Tarun
2012-06-01
Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters that best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for problems in which the target function (likelihood) possesses local maxima or has very high dimensionality. Apart from this, there are cases in which we are mainly interested in finding the point in parameter space at which the probability distribution attains its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (ΛCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
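A minimal global-best PSO of the kind described above, run on a toy quadratic "chi-square" (the target function, inertia weight, and acceleration constants are conventional illustrative choices, not the WMAP likelihood or the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def chi2(theta):
    # Stand-in for a cosmological likelihood: minimum at (0.3, 0.7).
    return np.sum((theta - np.array([0.3, 0.7])) ** 2)

n_particles, n_iter, dim = 20, 200, 2
x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
v = np.zeros_like(x)                         # particle velocities
pbest = x.copy()
pbest_val = np.array([chi2(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    # Inertia + cognitive pull toward pbest + social pull toward gbest.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([chi2(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

assert chi2(gbest) < 1e-3   # swarm converges to the best-fit point
```

No prior guess or covariance information enters the search, which is the property the abstract highlights relative to MCMC.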
Optimization of filtering schemes for broadband astro-combs
Walsworth, Ronald L.
nonlinear optical fiber to spectrally broaden the filtered and amplified narrowband frequency comb a narrowband, femtosecond laser frequency comb ("source-comb"), one must integrate the source-comb with three). We find that even a small nonlinear phase can reduce suppression of filtered comb lines, and increase
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
Expedite Particle Swarm Optimization Algorithm (EPSO) for Optimization of MSA
NASA Astrophysics Data System (ADS)
Rathi, Amit; Vijay, Ritu
This paper presents a new design method for a rectangular-patch microstrip antenna (MSA) using an artificial search algorithm with constraints. The design requires two stages. In the first stage, the bandwidth of the MSA is modeled using a benchmark function. In the second stage, the output of the first stage is given as input to a modified artificial search algorithm, Particle Swarm Optimization (PSO), which produces five output parameters: dimensional width, frequency range, dielectric loss tangent, length over a ground plane with a substrate thickness, and electrical thickness. In PSO, the cognition factor and the social learning factor have an important effect on balancing local and global search. Based on modifying these two factors, this paper presents a strategy in which the cognition (learning) factor has more influence at the start of the process, and the social learning factor gradually gains more influence thereafter in finding the global best. The aim is to determine whether, under these circumstances, the modifications to PSO give better results for optimization of the MSA.
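The scheduling idea above, cognition-dominated early and socially dominated late, can be expressed as a pair of linearly time-varying acceleration coefficients. The 2.5/0.5 endpoints below are a common choice in the PSO literature, not values taken from this paper.

```python
# Time-varying acceleration coefficients: the cognition factor c1 decays
# while the social factor c2 grows over the run, so early iterations favor
# each particle's own best (local search) and late iterations favor the
# swarm's global best. Endpoint values 2.5/0.5 are illustrative.
def acceleration_coeffs(t, t_max, c_hi=2.5, c_lo=0.5):
    frac = t / t_max
    c1 = c_hi - (c_hi - c_lo) * frac   # cognition: high -> low
    c2 = c_lo + (c_hi - c_lo) * frac   # social:    low  -> high
    return c1, c2

c1_start, c2_start = acceleration_coeffs(0, 100)
c1_end, c2_end = acceleration_coeffs(100, 100)
assert c1_start > c2_start   # cognition dominates at the start
assert c1_end < c2_end       # social learning dominates at the end
```

These coefficients would replace the fixed 1.5/1.5 factors in a standard PSO velocity update.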
Assessing consumption of bioactive micro-particles by filter-feeding Asian carp
Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.
2012-01-01
Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits the filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood whether SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate whether SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption by filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 µm) and large (150-200 µm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.
Optease Vena Cava Filter Optimal Indwelling Time and Retrievability
Rimon, Uri; Bensaid, Paul; Golan, Gil; Garniek, Alexander; Khaitovich, Boris; Dotan, Zohar; Konen, Eli
2011-06-15
The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare the change in retrieval policy over the years: group A, 60 patients with filter retrievals performed before December 31, 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 (range, 8-122) days. There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and of group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.
Capellari, Giovanni; Eftekhar Azam, Saeed; Mariani, Stefano
2015-01-01
Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As the main drawback of standard monitoring procedures is linked to their computational costs, two remedies are jointly considered: first, an order reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees of freedom of the structural model to a few only (depending on the excitation), whereas the latter allows the evolution of damage to be tracked and located, owing to its formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615
Optimal design of a generalized compound eye particle detector array
Nehorai, Arye
Optimal design of a generalized compound eye particle detector array. Arye Nehorai, Zhi Liu. ABSTRACT: We analyze the performance of a novel detector array for detecting and localizing particle shape with a lens on top and a particle detector subarray inside. The array's configuration is inspired
Distributed Adaptive Particle Swarm Optimizer in Dynamic Environment
Cui, Xiaohui; Potok, Thomas E
2007-01-01
In the real world, we frequently have to deal with searching for and tracking an optimal solution in a dynamic and noisy environment. This demands that the algorithm not only find the optimal solution but also track the trajectory of the changing solution. Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique which can find an optimal, or near-optimal, solution to a numerical or qualitative problem. In the PSO algorithm, the problem solution emerges from the interactions between many simple individual agents called particles, which makes PSO an inherently distributed algorithm. However, the traditional PSO algorithm lacks the ability to track the optimal solution in a dynamic and noisy environment. In this paper, we present a distributed adaptive PSO (DAPSO) algorithm that can be used for tracking a non-stationary optimal solution in a dynamically changing and noisy environment.
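One common building block for PSO in dynamic environments is change detection by re-evaluating the stored global best: if its fitness has drifted, the landscape has moved and outdated swarm memories must be reset. The abstract does not specify DAPSO's mechanism, so the sketch below is a generic illustration with invented names and values.

```python
import numpy as np

# Change detection for PSO in a dynamic environment: re-evaluate the stored
# global best each generation; a shifted fitness value signals that the
# optimum has moved, so stale pbest/gbest memories should be reset.
target = np.array([1.0, -1.0])          # the (moving) optimum

def fitness(x):
    return -float(np.sum((x - target) ** 2))

gbest = np.array([1.0, -1.0])           # swarm memory: sits on the optimum
gbest_val = fitness(gbest)              # cached fitness at storage time

target = target + np.array([0.5, 0.0])  # the environment changes

changed = abs(fitness(gbest) - gbest_val) > 1e-9
assert changed   # re-evaluation reveals the shift; memories must be reset
```

After such a detection, a dynamic PSO typically re-randomizes part of the swarm to restore diversity before resuming the search.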
HYBRID PARTICLE FILTER AND MEAN SHIFT TRACKER WITH ADAPTIVE TRANSITION MODEL
Cavallaro, Andrea
HYBRID PARTICLE FILTER AND MEAN SHIFT TRACKER WITH ADAPTIVE TRANSITION MODEL Emilio Maggio. To overcome these problems, the proposed tracker first produces a smaller number of samples than Particle pre- dicts the state based on adaptive variances. Experimental results show that the combined tracker
EFFICIENT PARTICLE-PAIR FILTERING FOR ACCELERATION OF MOLECULAR DYNAMICS SIMULATION
Herbordt, Martin
EFFICIENT PARTICLE-PAIR FILTERING FOR ACCELERATION OF MOLECULAR DYNAMICS SIMULATION Matt Chiu ABSTRACT The acceleration of molecular dynamics (MD) simulations using high performance reconfigurable: determining the short-range force between particle pairs. In particular, we present the first FPGA study
Robustness of optimal binary filters: analysis and design
Grigoryan, Artyom M
1999-01-01
designed. This problem is crucial for practical application since filters will always be applied to image processes that deviate from design processes. The present work treats the general concept of robust binary filters in the Bayesian framework, derives...
Leach, R.R.; Schultz, C.; Dowla, F.
1997-07-15
Development of a worldwide network to monitor seismic activity requires deployment of seismic sensors in areas which have not been well studied or may have few available recordings. Development and testing of detection and discrimination algorithms require a robust, representative set of calibrated seismic events for a given region. Utilizing events with poor signal-to-noise ratio (SNR) can add significant numbers to usable data sets, but these events must first be adequately filtered. Source and path effects can make this a difficult task, as filtering demands vary widely as a function of distance, event magnitude, bearing, depth, etc. For a given region, conventional methods of filter selection can be quite subjective and may require intensive analysis of many events. In addition, filter parameters are often overly generalized or contain complicated switching. We have developed a method to provide an optimized filter for any regionally or teleseismically recorded event. Recorded seismic signals contain arrival energy which is localized in frequency and time. Localized temporal signals whose frequency content differs from that of the pre-arrival record are identified using rms power measurements. The method is based on the decomposition of a time series into a set of time series signals, or scales, each representing a time-frequency band with a constant Q. SNR is calculated for a pre-event noise window and for a window estimated to contain the arrival. Scales with high SNR indicate the band-pass limits for the optimized filter. The results offer a significant improvement in SNR, particularly for low-SNR events. Our method provides a straightforward, optimized filter which can be immediately applied to unknown regions, as knowledge of the geophysical characteristics is not required.
The filtered signals can be used to map the seismic frequency response of a region and may provide improvements in travel-time picking, bearing estimation regional characterization, and event detection. Results are shown for a set of low SNR events as well as 92 regional and teleseismic events in the Middle East.
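The scale-selection step can be sketched as follows: compute rms power in a pre-event window and an arrival window for each of a set of constant-Q-style frequency bands, keep the bands with high SNR, and use them as the passband. The synthetic trace, band edges, and SNR threshold below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100.0                                    # Hz, assumed sampling rate
t = np.arange(0, 20, 1 / fs)
noise = rng.normal(0, 1, t.size)
# Synthetic 5 Hz arrival beginning at t = 10 s with exponential decay.
signal = np.where(t > 10, 3 * np.sin(2 * np.pi * 5 * (t - 10)) * np.exp(-(t - 10)), 0.0)
trace = noise + signal

def band_rms(x, lo, hi):
    """rms power of x restricted to the [lo, hi) Hz band via FFT masking."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return float(np.sqrt(np.mean(np.fft.irfft(X, x.size) ** 2)))

pre = trace[t < 10]                           # pre-event noise window
arr = trace[t >= 10]                          # window containing the arrival
bands = [(0.5, 1), (1, 2), (2, 4), (4, 8), (8, 16)]   # octave-like scales
snr = {b: band_rms(arr, *b) / band_rms(pre, *b) for b in bands}

passband = [b for b in bands if snr[b] > 1.3]  # keep high-SNR scales only
assert (4, 8) in passband                      # the 5 Hz arrival stands out
```

The selected bands then define the corners of the event-specific band-pass filter.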
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied first to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we have concluded on the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we have found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
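A minimal sketch of the pre-filtering step: smooth each frame with a Gaussian whose standard deviation is derived from the frame's intensity statistics before feeding it to pyramidal Lucas-Kanade. The linear intensity-to-sigma mapping below is an illustrative stand-in, not the empirical relation fitted in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def presmooth(frame):
    """Gaussian pre-filtering with an intensity-dependent sigma (hypothetical map)."""
    sigma = 0.5 + frame.mean() / 255.0      # brighter frames -> more smoothing
    return gaussian_filter(frame.astype(float), sigma), sigma

# Toy frame: bright square on a dark background.
frame = np.zeros((64, 64))
frame[20:40, 20:40] = 200.0

smoothed, sigma = presmooth(frame)
assert 0.5 < sigma < 1.5
assert smoothed[20, 20] < frame[20, 20]     # edges are attenuated
assert np.isclose(smoothed[30, 30], 200.0)  # plateau interior preserved
```

The smoothed frames would then enter the image pyramid used by the iterative Lucas-Kanade solver.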
E. Donth
2009-12-11
Culminating-point filter construction for particle points is distinguished from torus construction for wave functions in the tangent objects of their neighborhoods. The two constructions are not united by a general manifold diffeomorphism, but by a map of a hidden conformal $S^{1}\times S^{3}$ charge with harmonic (Maxwell) potentials into a physical space formed by culminating points, tangent objects, and Feynman connections. The particles are obtained from three classes of eigensolutions of the homogeneous potential equations on $S^{1}\times S^{3}$. The map of the $u(2)$-invariant vector fields into the Dirac phase factors of the connections yields the electro-weak Lagrangian with explicit mass operators for the massive leptons. The spectrum of massive particles is restricted by the small, manageable number of eigensolution classes and by an instability of the model for higher mass values. This instability also defines the huge numbers of filter elements needed for the culminating points. The Weinberg angle, current coupling constant, and lepton masses are calculated or estimated from the renormalization of filter properties. Consequences for particle astrophysics follow, on the one hand, from the restriction of particle classes and, on the other hand, from the suggestion of new particles from the three classes, e.g. of dark matter, of a confinon for the hadrons, and of a prebaryon. Definitely excluded are, e.g., SUSY constructions, Higgs particles, and a quark-gluon plasma: three-piece phenomena from the confinons are always present.
Kelley, C. T. "Tim"
Implicit Filtering for Constrained Optimization and Applications to Problems in the Natural Gas Pipeline Industry. Alton Patrick, Department of Mathematics, Center for Research in Scientific Computation
NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS
Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2 µm MMAD)...
Optimized Loading for Particle-in-cell Gyrokinetic Simulations
J.L.V. Lewandowski
2004-05-13
The problem of particle loading in particle-in-cell gyrokinetic simulations is addressed using a quadratic optimization algorithm. Optimized loading in configuration space dramatically reduces the short-wavelength modes in the electrostatic potential that are partly responsible for the non-conservation of total energy; further, the long-wavelength modes are resolved with good accuracy. As a result, energy conservation for the optimized loading is much better than for random loading. The method is valid for any geometry and can be coupled to optimization algorithms in velocity space.
Sun, W Y
1993-04-01
This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.
Optimized visualization of phase objects with semiderivative real filters
NASA Astrophysics Data System (ADS)
Sagan, Arkadiusz; Kowalczyk, Marek; Szoplik, Tomasz
2004-01-01
There is a need for a frequency-domain real filter that visualizes pure-phase objects with thickness either considerably smaller or much bigger than 2π rad and gives output image irradiance proportional to the first derivative of the object phase function for a wide range of phase gradients. We propose to construct a nonlinearly graded filter as a combination of the Foucault and square-root filters. The square-root filter in the frequency plane corresponds to the semiderivative in object space. Between the two half-planes with binary values of amplitude transmittance, a segment with nonlinearly varying transmittance is located. Within this intermediate sector the amplitude transmittance is given by a biased antisymmetric function whose positive- and negative-frequency branches are proportional to the square root of the spatial frequencies contained therein. Our simulations show that the modified square-root filter visualizes both thin and thick pure-phase objects with phase gradients from 0.6π up to more than 60π rad/mm.
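A 1D amplitude-transmittance profile of the kind described, binary 0/1 half-planes joined by a biased antisymmetric square-root segment, can be sketched as follows. The half-width w of the graded segment is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def filter_transmittance(f, w=10.0):
    """Amplitude transmittance: Foucault half-planes joined by a sqrt ramp.

    f : spatial-frequency axis; w : half-width of the graded segment
    (illustrative). Outside |f| >= w the filter is binary (0 or 1); inside,
    the biased antisymmetric branch 0.5 +/- 0.5*sqrt(|f|/w) applies.
    """
    t = np.empty_like(f, dtype=float)
    t[f <= -w] = 0.0
    t[f >= w] = 1.0
    mid = np.abs(f) < w
    t[mid] = 0.5 + 0.5 * np.sign(f[mid]) * np.sqrt(np.abs(f[mid]) / w)
    return t

f = np.linspace(-30, 30, 601)
t = filter_transmittance(f)
assert t[0] == 0.0 and t[-1] == 1.0   # binary half-planes
assert abs(t[300] - 0.5) < 1e-6       # biased (0.5) at zero frequency
```

Placed in the Fourier plane, such a mask passes one side of the spectrum, blocks the other, and applies the semiderivative weighting across the transition.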
Chaotic Particle Swarm Optimization with Mutation for Classification
Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza
2015-01-01
In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of early convergence to a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. To demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms, including k-nearest neighbor as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937
Liu, Jui-Nung; Schulmerich, Matthew V.; Bhargava, Rohit; Cunningham, Brian T.
2011-01-01
An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated to date have not attained the spectral selectivity (∼32 cm⁻¹) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here the optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm⁻¹. Both the shift of the filter resonance wavelengths arising from the dispersion effect and the reduction of peak reflection efficiency and electric-field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than filters designed by the antireflection equivalence method based on the same index-modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR-filter-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445
Optimization of soft-morphological filters by genetic algorithms
NASA Astrophysics Data System (ADS)
Huttunen, Heikki; Kuosmanen, Pauli; Koskinen, Lasse; Astola, Jaakko T.
1994-06-01
In this work we present a new approach to robust image modeling. The proposed method is based on M-estimation algorithms. However, unlike other M-estimator-based image processing algorithms, the new algorithm takes into consideration spatial relations between picture elements. The contribution of a sample to the model depends not only on the current residual of that sample, but also on the neighboring residuals. In order to test the proposed algorithm we apply it to an image filtering problem, where images are modeled as piecewise polynomials. We show that the filter based on our algorithm has excellent detail-preserving properties while suppressing additive Gaussian and impulsive noise very efficiently.
Removal of Particles and Acid Gases (SO2 or HCl) with a Ceramic Filter by Addition of Dry Sorbents
Hemmer, G.; Kasper, G.; Wang, J.; Schaub, G.
2002-09-20
The present investigation intends to add to the fundamental process design know-how for dry flue gas cleaning, especially with respect to process flexibility, in cases where variations in the type of fuel, and thus in the concentration of contaminants in the flue gas, require optimization of operating conditions. In particular, temperature effects of the physical and chemical processes occurring simultaneously in the gas-particle dispersion and in the filter cake/filter medium are investigated in order to improve the predictive capabilities for identifying optimum operating conditions. Sodium bicarbonate (NaHCO₃) and calcium hydroxide (Ca(OH)₂) are known as efficient sorbents for neutralizing acid flue gas components such as HCl, HF, and SO₂. According to their physical properties (e.g. porosity, pore size) and chemical behavior (e.g. thermal decomposition, reactivity for gas-solid reactions), optimum conditions for their application vary widely. The results presented concentrate on the development of quantitative data for filtration stability and overall removal efficiency as affected by operating temperature. Experiments were performed in a small pilot unit with a ceramic filter disk of the type Dia-Schumalith 10-20 (Fig. 1, described in more detail in Hemmer 2002 and Hemmer et al. 1999), using model flue gases containing SO₂ and HCl, flyash from wood bark combustion, and NaHCO₃ as well as Ca(OH)₂ as sorbent material (particle size d₅₀/d₈₄: 35/192 µm and 3.5/16 µm, respectively). The pilot unit consists of an entrained-flow reactor (gas duct) representing the raw gas volume of a filter house and the filter disk with a filter cake, operating continuously, simulating filter cake build-up and cleaning of the filter medium by jet pulse.
Temperatures varied from 200 to 600 °C, sorbent stoichiometric ratios from zero to 2, inlet concentrations were on the order of 500 to 700 mg/m³, and water vapor contents ranged from zero to 20 vol%. The experimental program with NaHCO₃ is listed in Table 1. In addition, model calculations were carried out, based on our own and published experimental results, to estimate residence time and temperature effects on removal efficiencies.
Filter performance of n99 and n95 facepiece respirators against viruses and ultrafine particles.
Eninger, Robert M; Honda, Takeshi; Adhikari, Atin; Heinonen-Tanski, Helvi; Reponen, Tiina; Grinshpun, Sergey A
2008-07-01
The performance of three filtering facepiece respirators (two models of N99 and one N95) challenged with an inert aerosol (NaCl) and three virus aerosols (enterobacteriophages MS2 and T4 and Bacillus subtilis phage), all with significant ultrafine components, was examined using a manikin-based protocol with respirators sealed on manikins. Three inhalation flow rates, 30, 85, and 150 l min⁻¹, were tested. The filter penetration and the quality factor were determined. Between-respirator and within-respirator comparisons of penetration values were performed. At the most penetrating particle size (MPPS), >3% of MS2 virions penetrated through filters of both N99 models at an inhalation flow rate of 85 l min⁻¹. Inhalation airflow had a significant effect upon particle penetration through the tested respirator filters. The filter quality factor was found suitable for making relative performance comparisons. The MPPS for challenge aerosols was <0.1 µm in electrical mobility diameter for all tested respirators. Mean particle penetration (by count) was significantly increased when the size fraction <0.1 µm was included, as compared to particles >0.1 µm. The filtration performance of the N95 respirator approached that of the two models of N99 over the range of particle sizes tested (approximately 0.02 to 0.5 µm). Filter penetration of the tested biological aerosols did not exceed that of the inert NaCl aerosol. The results suggest that inert NaCl aerosols may generally be appropriate for modeling filter penetration of similarly sized virions. PMID:18477653
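The quality factor used for such relative comparisons is conventionally qF = -ln(penetration)/Δp, trading filtration efficiency against breathing resistance. The penetration and pressure-drop numbers below are illustrative, not measurements from the study.

```python
import math

def quality_factor(penetration, pressure_drop_pa):
    """Filter quality factor qF = -ln(P) / dP (higher is better)."""
    return -math.log(penetration) / pressure_drop_pa

# Illustrative values: a less efficient filter with lower pressure drop
# can still score a higher quality factor than a more efficient one.
n95_qf = quality_factor(penetration=0.03, pressure_drop_pa=120.0)
n99_qf = quality_factor(penetration=0.01, pressure_drop_pa=220.0)

assert n95_qf > n99_qf
```

This is why the abstract can report the N95 "approaching" the N99 models despite its nominally lower efficiency class.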
Khan, T.; Ramuhalli, Pradeep; Dass, Sarat
2011-06-30
Flaw profile characterization from NDE measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem, and subsequent application of a sequential Monte Carlo method called particle filtering, has been proposed by the authors in an earlier publication [1]. In this study, the problem of flaw characterization from multi-sensor data is considered. The NDE inverse problem is posed as a statistical inverse problem and particle filtering is modified to handle data from multiple measurement modes. The measurement modes are assumed to be independent of each other with principal component analysis (PCA) used to legitimize the assumption of independence. The proposed particle filter based data fusion algorithm is applied to experimental NDE data to investigate its feasibility.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291
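The particle-filter core of such a scheme can be sketched with a bootstrap filter tracking a clock offset that drifts with a fixed skew. The full RB filter additionally splits linear and non-linear states (Kalman plus particle) and models delays with a DPM; the sketch below keeps only the particle-filter essentials, with invented noise levels and a known skew for simplicity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bootstrap particle filter for a drifting clock offset.
n_particles, n_steps, dt = 500, 50, 1.0
true_skew, obs_std, proc_std = 0.01, 0.05, 0.01   # illustrative values

offset_true = 0.0
particles = rng.normal(0, 0.5, n_particles)       # prior guesses of the offset
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    offset_true += true_skew * dt                 # true clock drifts
    obs = offset_true + rng.normal(0, obs_std)    # noisy timestamp exchange
    # Propagate particles (skew assumed known here) and reweight by likelihood.
    particles += true_skew * dt + rng.normal(0, proc_std, n_particles)
    weights *= np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample every step (a simple, common choice) and reset weights.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

estimate = particles.mean()
assert abs(estimate - offset_true) < 3 * obs_std  # tracks the true offset
```

Rao-Blackwellisation would replace part of this particle state with an exact Kalman update, which is what reduces the computational load relative to a pure particle filter.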
Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications
Moccia, Antonio
2014-01-01
Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of a collision threat. The most important parameter for the assessment of collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for the given current positions and speeds. Since conventional filtering methodologies can lose accuracy in the presence of nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
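For two constant-velocity trajectories, the Distance at Closest Point of Approach has a closed form; a small 2-D sketch with hypothetical coordinates (the actual system estimates these states from radar via the particle filter):

```python
import math

def dcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Distance at Closest Point of Approach for constant velocities.

    With relative position p0 and relative velocity v, the separation is
    p(t) = p0 + v*t, minimised at t* = -(p0 . v) / |v|^2, clamped to 0
    for diverging geometries.
    """
    px, py = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(px * vx + py * vy) / v2)
    return math.hypot(px + vx * t, py + vy * t)

# Head-on closure with a 100 m lateral offset: the DCPA is that offset.
print(dcpa((0, 0), (50, 0), (1000, 100), (-50, 0)))  # → 100.0
```

Because DCPA depends nonlinearly on position and velocity, errors in the filtered state propagate nonlinearly into the collision-risk metric, which is why more accurate state estimates pay off here.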
NASA Astrophysics Data System (ADS)
Raitoharju, Matti; Nurminen, Henri; Piché, Robert
2015-12-01
Indoor positioning based on wireless local area network (WLAN) signals is often enhanced using pedestrian dead reckoning (PDR) based on an inertial measurement unit. The state evolution model in PDR is usually nonlinear. We present a new linear state evolution model for PDR. In simulated-data and real-data tests of tightly coupled WLAN-PDR positioning, the positioning accuracy with this linear model is better than with the traditional models when the initial heading is not known, which is a common situation. The proposed method is computationally light and is also suitable for smoothing. Furthermore, we present modifications to WLAN positioning based on Gaussian coverage areas and show how a Kalman filter using the proposed model can be used for integrity monitoring and (re)initialization of a particle filter.
On the application of optimal wavelet filter banks for ECG signal classification
NASA Astrophysics Data System (ADS)
Hadjiloucas, S.; Jannah, N.; Hwang, F.; Galvão, R. K. H.
2014-03-01
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
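A two-channel perfect-reconstruction filter bank of the kind described above can be illustrated with its simplest member, the Haar pair. The paper optimizes the filter coefficients numerically; the Haar coefficients below are just the textbook baseline:

```python
import math

S = 1 / math.sqrt(2)  # Haar filter coefficient

def haar_analysis(x):
    """One level of a two-channel filter bank: lowpass (sum) and highpass
    (difference) filtering, each followed by downsampling by 2."""
    approx = [S * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [S * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse bank: upsample and combine; for the Haar pair this
    reconstructs the input exactly (perfect reconstruction)."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([S * (a + d), S * (a - d)])
    return x

sig = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0]
approx, detail = haar_analysis(sig)
rec = haar_synthesis(approx, detail)
print(max(abs(u - v) for u, v in zip(sig, rec)) < 1e-12)  # → True
```

Iterating the analysis step on the approximation channel yields the multi-level wavelet decomposition whose coefficients would feed the classifier.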
Evaluation of filter media for particle number, surface area and mass penetrations.
Li, Lin; Zuo, Zhili; Japuntich, Daniel A; Pui, David Y H
2012-07-01
The National Institute for Occupational Safety and Health (NIOSH) developed a standard for respirator certification under 42 CFR Part 84, using a TSI 8130 automated filter tester with photometers. A recent study showed that photometric detection methods may not be sensitive enough for measuring engineered nanoparticles. Present NIOSH standards for penetration measurement are mass-based; however, the threshold limit value/permissible exposure limit for worker exposure to engineered nanoparticles is not yet clear. There is a lack of standardized filter tests for engineered nanoparticles, and the development of a simple nanoparticle filter test is indicated. To better understand filter performance against engineered nanoparticles and the correlations among different tests, initial penetration levels of one fiberglass and two electret filter media were measured using a series of polydisperse and monodisperse aerosol test methods at two different laboratories (University of Minnesota Particle Technology Laboratory and 3M Company). Monodisperse aerosol penetrations were measured by a TSI 8160 using NaCl particles from 20 to 300 nm. Particle penetration curves and overall penetrations were measured by scanning mobility particle sizer (SMPS), condensation particle counter (CPC), nanoparticle surface area monitor (NSAM), and TSI 8130 at two face velocities and three layer thicknesses. Results showed that reproducible, comparable filtration data were achieved between the two laboratories, with proper control of test conditions and calibration procedures. For particle penetration curves, the experimental results of monodisperse testing agreed well with polydisperse SMPS measurements. The most penetrating particle sizes (MPPSs) of the electret and fiberglass filter media were ~50 and 160 nm, respectively. For overall penetrations, the CPC and NSAM results of polydisperse aerosols were close to the penetration at the corresponding median particle sizes.
For each filter type, power-law correlations between the penetrations measured by different instruments show that the NIOSH TSI 8130 test may be used to predict penetrations at the MPPS as well as the CPC and NSAM results with polydisperse aerosols. It is recommended to use dry air (<20% RH) as makeup air in the test system, to prevent the sodium chloride particles from deliquescing and to minimize the dielectric constant of the challenge particles, and to use an adequate neutralizer to fully neutralize the polydisperse challenge aerosol. For a simple nanoparticle penetration test, it is recommended to use a polydisperse aerosol challenge with a geometric mean of ~50 nm, with the CPC or the NSAM as detector. PMID:22752097
Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah
2015-01-01
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performance is better than two leading alternative approaches—Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims. PMID:25978493
Particle separation and collection using an optical chromatographic filter
NASA Astrophysics Data System (ADS)
Hart, Sean J.; Terray, Alex V.; Arnold, Jonathan
2007-10-01
An optofluidic design has been used to completely separate and collect fractions of an injected mixture of colloidal particles. A three-dimensional glass microfluidic device was constructed such that the fluid was directed through a 50-μm-diameter channel. A laser was introduced opposite the flow and its spot size adjusted to completely fill the channel. Thus, for a given laser power and flow rate, certain particles are completely retained while others pass through unhindered. Separation efficiencies in excess of 99% have been attained for a mixture of polymer and silica beads.
Particle Count Statistics Applied to the Penetration of a Filter Challenged with Nanoparticles
O’Shaughnessy, Patrick T.; Schmoll, Linda H.
2014-01-01
Statistical confidence in a single measure of filter penetration (P) is dependent on the low number of particle counts made downstream of the filter. This paper discusses methods for determining an upper confidence limit (UCL) for a single measure of penetration. The magnitude of the UCL was then compared to the P value, UCL ≤ 2P, as a penetration acceptance criterion (PAC). This statistical method was applied to penetration trials involving an N95 filtering facepiece respirator challenged with sodium chloride and four engineered nanoparticles: titanium dioxide, iron oxide, silicon dioxide and single-walled carbon nanotubes. Ten trials were performed for each particle type with the aim of determining the most penetrating particle size (MPPS) and the maximum penetration, Pmax. The PAC was applied to the size channel containing the MPPS. With those P values that met the PAC for a given set of trials, an average Pmax and MPPS were computed together with corresponding standard deviations. Because the size distribution of the silicon dioxide aerosol was shifted towards larger particles relative to the MPPS, none of the ten trials satisfied the PAC for that aerosol. The remaining four particle types resulted in at least 4 trials meeting the criterion. MPPS values ranged from 35–53 nm, with average Pmax values varying from 4.0% for titanium dioxide to 7.0% for iron oxide. The use of the penetration acceptance criterion is suggested for determining the reliability of penetration measurements obtained to determine filter Pmax and MPPS. PMID:24678138
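The upper confidence limit on a single penetration measurement can be sketched by treating the downstream count as Poisson. The normal approximation and the count values below are illustrative assumptions, not the paper's exact statistics:

```python
import math

def penetration_ucl(n_down, n_up, z=1.96):
    """Penetration P and its 95% upper confidence limit, treating the
    downstream particle count as Poisson (normal approximation)."""
    p = n_down / n_up
    ucl = (n_down + z * math.sqrt(n_down)) / n_up
    return p, ucl

def meets_pac(n_down, n_up):
    """Penetration acceptance criterion: accept only if UCL <= 2P."""
    p, ucl = penetration_ucl(n_down, n_up)
    return ucl <= 2 * p

print(meets_pac(100, 10000))  # → True  (ample downstream counts)
print(meets_pac(2, 10000))    # → False (too few counts for confidence)
```

The criterion effectively requires enough downstream counts that the statistical uncertainty is no larger than the measured penetration itself.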
Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study
NASA Astrophysics Data System (ADS)
Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.
2013-12-01
The safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated against experiments conducted at the US Army Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. This work was performed under the auspices of the U.S.
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).
Ares-I Bending Filter Design using a Constrained Optimization Approach
NASA Technical Reports Server (NTRS)
Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth
2008-01-01
The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.
Removal of virus- to protozoan-sized particles in point-of-use ceramic water filters.
Bielefeldt, Angela R; Kowalski, Kate; Schilling, Cherylynn; Schreier, Simon; Kohler, Amanda; Scott Summers, R
2010-03-01
The particle removal performance of point-of-use ceramic water filters (CWFs) was characterized in the size range of 0.02-100 μm using carboxylate-coated polystyrene fluorescent microspheres, natural particles and clay. Particles were spiked into dechlorinated tap water, and three successive water batches were treated in each of six different CWFs. Particle removal generally increased with increasing size. The removal of virus-sized 0.02 and 0.1 μm spheres was highly variable between the six filters, ranging from 63 to 99.6%. For the 0.5 μm spheres, removal was less variable and in the range of 95.1-99.6%, while for the 1, 2, 4.5, and 10 μm spheres removal was >99.6%. Recoating four of the CWFs with colloidal silver solution improved removal of the 0.02 μm spheres, but had no significant effects on the other particle sizes. Log removals of 1.8-3.2 were found for natural turbidity and spiked kaolin clay particles; however, particles as large as 95 μm were detected in filtered water. PMID:19926110
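Log removal values such as the 1.8-3.2 reported above are computed directly from influent and effluent concentrations; a minimal sketch with hypothetical concentrations:

```python
import math

def log_removal(c_in, c_out):
    """Log10 removal value: 2-log corresponds to 99% removal, 3-log to 99.9%."""
    return math.log10(c_in / c_out)

def removal_percent(c_in, c_out):
    """Removal efficiency as a percentage."""
    return 100.0 * (1.0 - c_out / c_in)

# 63% removal (the worst 0.02-um case above) vs 99.6% removal:
print(round(log_removal(100.0, 37.0), 2))  # → 0.43
print(round(log_removal(100.0, 0.4), 2))   # → 2.4
```

The log scale is the natural unit here because successive treatment barriers multiply concentrations, so their log removals simply add.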
Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems
NASA Astrophysics Data System (ADS)
Qi, Di; Majda, Andrew J.
2015-04-01
It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high-dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low-dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space as well as effective resampling strategies is discussed to handle realizability and stability issues. To test the performance of the blended algorithms, the forty-dimensional Lorenz 96 system is utilized with a five-dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics, with changing observation time frequency, and with both dense and sparse spatial observations. In real applications, perfect dynamical models are always inaccessible considering the complexities in both the modeling and computation of high-dimensional turbulent systems. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill in both capturing non-Gaussian statistics and achieving accurate filtering results in various dynamical regimes with and without model errors.
Georg Jäger; Ulrich Hohenester
2013-09-07
We theoretically investigate protocols based on optimal control theory (OCT) for manipulating Bose-Einstein condensates in magnetic microtraps, using the framework of the Gross-Pitaevskii equation. In our approach we explicitly account for filter functions that distort the computed optimal control, a situation inherent to many experimental OCT implementations. We apply our scheme to the shakeup process of a condensate from the ground to the first excited state, following a recent experimental and theoretical study, and demonstrate that the fidelity of OCT protocols is not significantly deteriorated by typical filters.
Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling
Safak, Erdal
1989-01-01
This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems and can be a useful tool in earthquake engineering.
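The ARMAX family used above generalizes autoregressive models; a minimal stand-in is a least-squares AR(2) fit. This sketch has no exogenous input or moving-average term and uses synthetic noise-free data, so it only illustrates the idea of fitting a parametric spectral model to a record:

```python
def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + e[t] by
    solving the 2x2 normal equations directly."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]
        s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        b1 += x[t] * x[t-1]
        b2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

# Recover the coefficients from a noise-free AR(2) realization
# (a1 = 1.5, a2 = -0.7 gives a stable, oscillatory decay).
x = [1.0, 0.5]
for _ in range(200):
    x.append(1.5 * x[-1] - 0.7 * x[-2])
a1, a2 = fit_ar2(x)
print(round(a1, 3), round(a2, 3))  # → 1.5 -0.7
```

The fitted coefficients determine a rational transfer function whose frequency response plays the role of the spectral model for the ground motion.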
Optimization of magnetic switches for single particle and cell transport
Abedini-Nassab, Roozbeh; Yellen, Benjamin B.; Murdoch, David M.; Kim, CheolGi
2014-06-28
The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.
NASA Astrophysics Data System (ADS)
Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I. N.; Antoun, T.; Smith, J.; Hall, R.; Woodson, S.
2014-12-01
Understanding the flow of fines and particulate sorting in porous and fractured media during sediment transport is significant for industrial, environmental, geotechnical and petroleum technologies, to name a few. For example, the safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique. The numerical code is validated against experiments conducted at the USACE ERDC. These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters such as particle size ratio, the magnitude of hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles.
We have extended these results to more realistic heterogeneous particulate populations for sediment transport. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security, Science and Technology Directorate, Homeland Security Advanced Research Projects Agency.
An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images
Coupé, Pierrick; Yger, Pierre; Prima, Sylvain; Hellier, Pierre; Kervrann, Charles; Barillot, Christian
2008-01-01
A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the Non Local (NL) means filter [1]. The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2D images, but reducing the computational burden is a critical aspect to extend the method to 3D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully-automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: (a) an automatic tuning of the smoothing parameter, (b) a selection of the most relevant voxels, (c) a blockwise implementation and (d) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb [2]. The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion [3] and Total Variation minimization process [4]), in terms of accuracy (measured by the Peak Signal to Noise Ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341
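The core NL-means weighting, patch similarity mapped through an exponential kernel, can be sketched in one dimension. The 3-D blockwise, parallelized version in the paper is far more elaborate; the signal and parameters here are illustrative:

```python
import math

def nl_means_1d(x, patch=1, h=0.5):
    """Non-local means on a 1-D signal: each sample is replaced by a
    weighted average of all samples, with weights given by the
    similarity of the patches surrounding them."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            # Squared distance between patches centred at i and j
            # (indices clamped at the borders).
            d = 0.0
            for k in range(-patch, patch + 1):
                a = x[min(max(i + k, 0), n - 1)]
                b = x[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]  # two flat zones plus noise
out = nl_means_1d(noisy)
print([round(v, 2) for v in out])  # each zone is smoothed toward its mean
```

Because weights collapse to near zero between dissimilar patches, samples in one flat zone are averaged only with their own zone, which is what preserves the edge between the two levels.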
High-efficiency particulate air filter test stand and aerosol generator for particle loading studies
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.
2007-08-01
This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 × 30 × 29 cm³ nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m³/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150°C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m³/min, high mass concentrations (~25 mg/m³) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of downstream particle size distributions.
Integration of GPS precise point positioning and MEMS-based INS using unscented particle filter.
Abd Rabbou, Mahmoud; El-Rabbany, Ahmed
2015-01-01
An integrated Global Positioning System (GPS)/Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical systems (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique to accommodate severe MEMS inertial sensor biases and noise behavior. However, the computational burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446
PARTICLE FILTERING AND CRAMÉR-RAO LOWER BOUND FOR UNDERWATER NAVIGATION
Gustafsson, Fredrik
PARTICLE FILTERING AND CRAMÉR-RAO LOWER BOUND FOR UNDERWATER NAVIGATION. Rickard Karlsson, Fredrik Gustafsson, Sweden. E-mail: tobias.karlsson.lith@dynamics.saab.se. ABSTRACT: We have studied a sea navigation method ... is interpreted in terms of the inertial navigation system (INS) error, the sensor accuracy and the terrain map
Tracking Human Body Parts Using Particle Filters Constrained by Human Biomechanics
Nebel, Jean-Christophe
Tracking Human Body Parts Using Particle Filters Constrained by Human Biomechanics. J. Martínez ... of human body parts is introduced. The presented approach demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera using a limb tracking system based on a 2D
Tracking Human Position and Lower Body Parts Using Kalman and Particle Filters Constrained by
Nebel, Jean-Christophe
Tracking Human Position and Lower Body Parts Using Kalman and Particle Filters Constrained ... for visual tracking of human body parts is introduced. The presented approach demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera using a limb tracking system based on a 2D
Teschner, Matthias
A typical example from factory floors is that of a mobile manipulation robot that has to pick up objects. On the Position Accuracy of Mobile Robot Localization based on Particle Filters Combined with Scan Matching. Burgard. Abstract: Many applications in mobile robotics, and especially industrial applications, require
An assessment of particle filtering methods and nudging for climate state reconstructions
NASA Astrophysics Data System (ADS)
Dubinkina, S.; Goosse, H.
2013-05-01
Using the climate model of intermediate complexity LOVECLIM in an idealised framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are a nudging, a particle filter with sequential importance resampling, and a nudging proposal particle filter. The test case corresponds to the climate of the high latitudes of the Southern Hemisphere during the past 150 yr. The data-assimilation methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from the same model, but different initial conditions. All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the nudging proposal particle filter obtaining the highest correlations with the pseudo-observations. When reconstructing variables that are not directly linked to the pseudo-observations, such as atmospheric circulation and sea surface salinity, the particle filters have equivalent performance, and their correlations are smaller than for surface air temperature reconstructions but still satisfactory for many applications. The nudging, on the contrary, obtains sea surface salinity patterns that are opposite to the pseudo-observations, which is due to a spurious impact of the nudging on vertical exchanges in the ocean.
Analytical model for particle migration within base soil-filter system
Indraratna, B.; Vafai, F.
1997-02-01
Cracking of impervious dam cores can occur due to differential settlement, construction deficiencies, or hydraulic fracturing. When leakage occurs through a cracked core, leakage channels may erode. The studies referred to in this paper have mostly found that for gradients typical of dams, erosion in cracks or other leakage channels usually occurs quickly and clogs the filter in the area of the crack, which is beneficial in practice. This study highlights a mathematical (analytical) model simulating the filtration phenomenon applicable to a base soil-filter system, incorporating the hydraulic conditions and the relevant material properties such as porosity, density, friction angle, and the shape and distribution of particles. The model is founded on the concept of critical hydraulic gradient derived from limit equilibrium considerations, where the migration of particles is assumed to occur under applied hydraulic gradients exceeding this critical value. The rate of particle erosion, and hence the filter effectiveness, is quantified on the basis of mass and momentum conservation theories. By dividing the base soil and filter domains into discrete elements, the model is capable of predicting the time-dependent particle gradation and permeability of each element, and thereby the amount of material eroded from or retained within the system. Laboratory tests conducted on a fine base material verify the validity of the model. The model predictions are also compared with the available empirical recommendations, including the conventional grading ratios.
A Particle Filter for Monocular Vision-Aided Odometry Teddy Yap, Jr
Shelton, Christian R.
... such estimates, in most cases a robot fuses measurements from multiple onboard sensors. Typically ... features in the environment, taken by a camera mounted on the robot. Our key contribution is a novel ... Abstract: We propose a particle filter-based algorithm for monocular vision-aided odometry for mobile robots.
X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES
X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...
An assessment of climate state reconstructions obtained using particle filtering methods
NASA Astrophysics Data System (ADS)
Dubinkina, Svetlana; Goosse, Hugues
2013-04-01
In an idealized framework, we assess reconstructions of the climate state of the Southern Hemisphere during the past 150 years using the climate model of intermediate complexity LOVECLIM and three data-assimilation methods: a nudging, a particle filter with sequential importance resampling, and an extremely efficient particle filter. The methods constrain the model by pseudo-observations of surface air temperature anomalies obtained from a twin experiment using the same model but different initial conditions. The network of pseudo-observations is chosen to be either dense (pseudo-observations given at every grid cell of the model) or sparse (pseudo-observations given at the same locations as the dataset of instrumental surface temperature records HADCRUT3). All three data-assimilation methods provide good estimates of surface air temperature and of sea ice concentration, with the extremely efficient particle filter having the best performance. When reconstructing variables that are not directly linked to the pseudo-observations of surface air temperature, such as atmospheric circulation and sea surface salinity, the performance of the particle filters is weaker but still satisfactory for many applications. Sea surface salinity reconstructed by the nudging, however, exhibits patterns opposite to the pseudo-observations, which is due to a spurious impact of the nudging on ocean mixing.
Cluster Based Sensor Scheduling in a Target Tracking Application with Particle Filtering
Bayazit, Ulug
Cluster Based Sensor Scheduling in a Target Tracking Application with Particle Filtering. Özgür ... Electronics Engineering, Istanbul University, Istanbul, Turkey. hcirpan@istanbul.edu.tr. Abstract: In multi-sensor applications, management of sensors is necessary for the classification of the data they produce
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 1 A Beamformer-Particle Filter Framework for
Bouaynaya, Nidhal
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS: A Beamformer-Particle Filter Framework ... Mihaylova (corresponding author) is with the Department of Automatic Control and Systems Engineering, University ... is with the University of South Australia, Australia (e-mail: lakhmi.jain@unisa.edu.au) ... the ill-posed inverse problem
Ravindran, Binoy
Energy Efficient Target Tracking in Wireless Sensor Networks: Sleep Scheduling, Particle Filtering. ... with limited tracking performance loss. In addition, we expand sleep scheduling to multiple target tracking ... Bayesian estimation methods, when target tracking is considered as a dynamic state estimation problem
Nakano, Shin'ya
Shin'ya Nakano, The Institute of Statistical Mathematics, Tachikawa, Tokyo, Japan. shiny@ism.ac.jp. Abstract: A practical way to implement the particle filter (PF) on a massively parallel computer is discussed ... to be computationally expensive in applying to high-dimensional problems because an enormous number of particles
LS-N-IPS: an Improvement of Particle Filters by Means of Local Search
Szepesvari, Csaba
LS-N-IPS: an Improvement of Particle Filters by Means of Local Search. Péter Torma, Eötvös Lor... ... algorithm in the small sample size limit and when the observations are "reliable". The algorithm called LS ... in a local search procedure that utilizes the most recent observation. The uniform stability of LS
Multi-Robot Cooperative Object Tracking Based on Particle Filters Aamir Ahmad Pedro Lima
Multi-Robot Cooperative Object Tracking Based on Particle Filters. Aamir Ahmad, Pedro Lima. Institute for Systems and Robotics, Instituto Superior Técnico, Lisboa, Portugal. Abstract: This article presents a cooperative approach for tracking a moving object by a team of mobile robots equipped
Improved Particle Swarm Optimization for Global Optimization of Unimodal and Multimodal Functions
NASA Astrophysics Data System (ADS)
Basu, Mousumi
2015-07-01
Particle swarm optimization (PSO) performs well for small dimensional and less complicated problems but fails to locate global minima for complex multi-minima functions. This paper proposes an improved particle swarm optimization (IPSO) which introduces Gaussian random variables in velocity term. This improves search efficiency and guarantees a high probability of obtaining the global optimum without significantly impairing the speed of convergence and the simplicity of the structure of particle swarm optimization. The algorithm is experimentally validated on 17 benchmark functions and the results demonstrate good performance of the IPSO in solving unimodal and multimodal problems. Its high performance is verified by comparing with two popular PSO variants.
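The velocity-term modification described above can be sketched as follows. This is a minimal illustration of the idea of drawing the stochastic factors from a Gaussian rather than a uniform distribution; the exact IPSO update, coefficients, and velocity clamp here are assumptions, not the paper's formulation.

```python
import numpy as np

def ipso_step(x, v, pbest, gbest, w=0.6, c1=1.2, c2=1.2, vmax=4.0, rng=None):
    """One IPSO-style update: the stochastic factors in the velocity term are
    drawn from |N(0,1)| instead of U(0,1).  Coefficients are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = np.abs(rng.normal(size=x.shape))   # Gaussian random variables
    r2 = np.abs(rng.normal(size=x.shape))
    v = np.clip(w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), -vmax, vmax)
    return x + v, v

def minimize_sphere(dim=5, n_particles=20, iters=100, seed=1):
    """Drive the swarm on the unimodal sphere benchmark f(x) = sum(x**2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), (x**2).sum(axis=1)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        x, v = ipso_step(x, v, pbest, gbest, rng=rng)
        f = (x**2).sum(axis=1)
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[pval.argmin()].copy()
    return pval.min()
```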
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible.
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
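The maximin fitness analysis mentioned above can be sketched as follows. This is a common formulation of maximin fitness for multi-objective swarms (assumed here, all objectives minimized), not necessarily the authors' exact variant.

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness of each particle given objective matrix F
    (n_particles x n_objectives).  For particle i:
        fit[i] = max over j != i of min over objectives of F[i] - F[j].
    Values below zero mark non-dominated (Pareto-optimal) particles; the
    particle with the smallest value is a natural swarm leader."""
    n = F.shape[0]
    fit = np.empty(n)
    for i in range(n):
        diff = F[i] - np.delete(F, i, axis=0)   # pairwise objective gaps
        fit[i] = diff.min(axis=1).max()
    return fit
```

A single pass over the swarm thus replaces the costlier ranking-plus-niching machinery for leader selection.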
LeGland, François
for Positioning, Navigation, and Tracking Fredrik Gustafsson, Fredrik Gunnarsson, Niclas Bergman, Urban Forssell, navigation, and tracking problems using particle filters (sequential Monte Carlo methods) is developed navigation (as GPS) but with higher integrity. Based on simulations, we also argue how the particle filter
Optimal filtering for spike sorting of multi-site electrode recordings.
Vollgraf, Roland; Munk, Matthias; Obermayer, Klaus
2005-03-01
We derive an optimal linear filter, to reduce the distortions of the peak amplitudes of action potentials in extracellular multitrode recordings, which are due to background activity and overlapping spikes. This filter is learned very efficiently from the raw recordings in an unsupervised manner and responds to the average waveform with an impulse of minimal width. The average waveform does not have to be known in advance, but is learned together with the optimal filter. The peak amplitude of a filtered waveform is a more reliable estimate for the amplitude of an action potential than the peak of the biphasic waveform and can improve the accuracy of the event detection and clustering procedures. We demonstrate a spike-sorting application, in which events are detected using the Mahalanobis distance in the N-dimensional space of filtered recordings as a distance measure, and the event amplitudes of the filtered recordings are clustered to assign events to individual units. This method is fast and robust, and we show its performance by applying it to real tetrode recordings of spontaneous activity in the visual cortex of an anaesthetized cat and to realistic artificial data derived therefrom. PMID:16350435
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
Alderman, Steven L; Parsons, Michael S; Hogancamp, Kristina U; Waggoner, Charles A
2008-11-01
High-efficiency particulate air (HEPA) filters are widely used to control particulate matter emissions from processes that involve management or treatment of radioactive materials. Section FC of the American Society of Mechanical Engineers AG-1 Code on Nuclear Air and Gas Treatment currently restricts media velocity to a maximum of 2.5 cm/sec in any application where this standard is invoked. There is some desire to eliminate or increase this media velocity limit. A concern is that increasing media velocity will result in higher emissions of ultrafine particles; thus, it is unlikely that higher media velocities will be allowed without data to demonstrate the effect of media velocity on removal of ultrafine particles. In this study, the performance of nuclear grade HEPA filters, with respect to filter efficiency and most penetrating particle size, was evaluated as a function of media velocity. Deep-pleat nuclear grade HEPA filters (31 cm x 31 cm x 29 cm) were evaluated at media velocities ranging from 2.0 to 4.5 cm/sec using a potassium chloride aerosol challenge having a particle size distribution centered near the HEPA filter most penetrating particle size. Filters were challenged under two distinct mass loading rate regimes through the use or exclusion of a 3 microm aerodynamic diameter cut-point cyclone. Filter efficiency and most penetrating particle size measurements were made throughout the duration of filter testing. Filter efficiency measured at the onset of aerosol challenge was noted to decrease with increasing media velocity, with values ranging from 99.999 to 99.977%. The filter most penetrating particle size recorded at the onset of testing was noted to decrease slightly as media velocity was increased and was typically in the range of 110-130 nm. Although additional testing is needed, these findings indicate that filters operating at media velocities up to 4.5 cm/sec will meet or exceed current filter efficiency requirements.
Additionally, increased emission of ultrafine particles is seemingly negligible. PMID:18726819
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Atkins, E.; Chorin, A. J.
2011-12-01
The task in data assimilation is to identify the state of a system from an uncertain model supplemented by a stream of incomplete and noisy data. The model is typically given in form of a discretization of an Ito stochastic differential equation (SDE), x(n+1) = R(x(n)) + G W(n), where x is an m-dimensional vector and n = 0, 1, 2, .... The m-dimensional vector function R and the m x m matrix G depend on the SDE as well as on the discretization scheme, and W is an m-dimensional vector whose elements are independent standard normal variates. The data are y(n) = h(x(n)) + Q V(n), where h is a k-dimensional vector function, Q is a k x k matrix and V is a vector whose components are independent standard normal variates. One can use statistics of the conditional probability density (pdf) of the state given the observations, p(n+1) = p(x(n+1)|y(1), ..., y(n+1)), to identify the state x(n+1). Particle filters approximate p(n+1) by sequential Monte Carlo and rely on the recursive formulation of the target pdf, p(n+1) ∝ p(x(n+1)|x(n)) p(y(n+1)|x(n+1)). The pdf p(x(n+1)|x(n)) can be read off of the model equations to be a Gaussian with mean R(x(n)) and covariance matrix Σ = GG^T, where T denotes a transpose; the pdf p(y(n+1)|x(n+1)) is a Gaussian with mean h(x(n+1)) and covariance QQ^T. In a sampling-importance-resampling (SIR) filter one samples new values for the particles from a prior pdf and then weighs these samples with weights determined by the observations, to yield an approximation to p(n+1). Such weighting schemes often yield small weights for many of the particles. Implicit particle filtering overcomes this problem by using the observations to generate the particles, thus focusing attention on regions of large probability. A suitable algebraic equation that depends on the model and the observations is constructed for each particle, and its solution yields high-probability samples of p(n+1).
In the current formulation of the implicit particle filter, the state covariance matrix Σ is assumed to be non-singular. In the present work we consider the case where the covariance Σ is singular. This happens in particular when the noise is spatially smooth and can be represented by a small number of Fourier coefficients, as is often the case in geophysical applications. We derive an implicit filter for this problem and show that it is very efficient, because the filter operates in a space whose dimension is the rank of Σ, rather than the full model dimension. We compare the implicit filter to SIR, to the ensemble Kalman filter and to variational methods, and also study how information from data is propagated from observed to unobserved variables. We illustrate the theory on two coupled nonlinear PDEs in one space dimension that have been used as a test-bed for geomagnetic data assimilation. We observe that the implicit filter gives good results with few (2-10) particles, while SIR requires thousands of particles for similar accuracy. We also find lower limits to the accuracy of the filter's reconstruction as a function of data availability.
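The SIR weighting step for the measurement model y = h(x) + Q V stated in this abstract can be sketched directly: p(y|x) is Gaussian with mean h(x) and covariance Q Q^T, so each particle's weight is that density evaluated at the observation. A minimal sketch (the log-space normalization guard is an implementation detail, not from the abstract):

```python
import numpy as np

def sir_weights(particles, y, h, Q):
    """SIR weighting step for y = h(x) + Q V with V standard normal:
    p(y|x) is Gaussian with mean h(x) and covariance Q Q^T.
    Returns weights normalized to sum to one."""
    S = Q @ Q.T                                    # measurement covariance
    Sinv = np.linalg.inv(S)
    logw = np.array([-0.5 * (y - h(x)) @ Sinv @ (y - h(x)) for x in particles])
    logw -= logw.max()                             # guard against underflow
    w = np.exp(logw)
    return w / w.sum()
```

As the abstract notes, many of these weights are typically tiny; the implicit filter avoids this by solving, per particle, an equation that places samples where p(n+1) is large.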
Boundary filters for vector particles passing parity breaking domains
Kolevatov, S. S.; Andrianov, A. A.
2014-07-23
The electrodynamics supplemented with a Lorentz and CPT invariance violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, stitched on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Effects of reflection from the boundary (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum when passing the boundary.
Force Error Optimization in Smooth Particle Hydrodynamics
R. Capuzzo-Dolcetta; R. Di Lisio
1996-11-22
We discuss the capability of Smooth Particle Hydrodynamics to represent adequately the dynamics of self-gravitating systems, in particular as regards the quality of approximation of force fields in the motion equations. When cubic spline kernels are used, we find that a good estimate of the pressure field cannot be obtained in non-uniform situations using the commonly adopted scheme of adapting the kernel sizes to include a fixed number of neighbours. We find that a fixed number of neighbours gives the best approximation of just the intensity of the force field, while the determination of the direction of the force requires a number of neighbours which depends strongly on the particle position.
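The cubic spline kernel referred to above is the standard M4 spline with compact support 2h; a one-dimensional sketch (the 1-D normalization 2/(3h) is standard, and the choice of 1-D rather than 3-D is an assumption for brevity):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D M4 cubic spline SPH kernel with support radius 2h.
    q = |r|/h; the prefactor 2/(3h) normalizes the kernel to unit integral."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w
```

Adapting h so that a fixed number of neighbours falls inside the 2h support is exactly the scheme whose limitations the abstract discusses.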
Optimization of high speed pipelining in FPGA-based FIR filter design using genetic algorithm
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Botella, Guillermo; Romero, David E. T.; Kumm, Martin
2012-06-01
This paper compares FPGA-based fully pipelined multiplierless FIR filter design options. Comparisons of the Distributed Arithmetic (DA), Common Sub-Expression (CSE) sharing, and n-dimensional Reduced Adder Graph (RAG-n) multiplierless filter design methods in terms of size, speed, and A*T product are provided. Since DA designs are table-based and CSE/RAG-n designs are adder-based, FPGA synthesis design data are used for a realistic comparison. Superior results of a genetic-algorithm-based optimization of pipeline registers and non-output fundamental coefficients are shown. FIR filters (posted as open source by Kastner et al.) with lengths from 6 to 151 coefficients are used.
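The adder-based (CSE/RAG-n) designs compared above replace each coefficient multiplication with shifts and adds. A hypothetical single-coefficient illustration of the principle, for the arbitrarily chosen coefficient 23 (not a coefficient from the paper's benchmark filters):

```python
def mul23_binary(x):
    """23*x from the plain binary form 10111: 16x + 4x + 2x + x (3 adders)."""
    return (x << 4) + (x << 2) + (x << 1) + x

def mul23_csd(x):
    """23*x from the canonical signed digit form 32x - 8x - x (2 adders):
    the kind of adder saving that CSE/RAG-n style designs exploit."""
    return (x << 5) - (x << 3) - x
```

Sharing such sub-expressions across all taps of a filter is what drives the size and A*T differences the paper measures.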
Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin
Claridge, Ela
Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin. Stephen J. ... the error associated with histological parameters characterizing normal skin tissue. These parameters can be recovered from digital images of the skin using a physics-based model of skin coloration. The relationship
Gajic, Zoran
Determine the closed-loop system eigenvalues. c) Design an observer for this car ... Project # 3 -- 332:406 Control System Design: Optimal Control and Kalman Filtering for a Passenger Car. Project due Thursday, April 1, 2004. A mathematical model of a passenger car is given by (Salman
Gajic, Zoran
Design an observer for this car with the observer poles (eigenvalues) being much faster than the system ... Project # 3 --- 332:406 Control System Design: Optimal Control and Kalman Filtering for a Passenger Car. Project due Thursday, April 1, 2004. A mathematical model of a passenger car is given by (Salman
DMT Bit Rate Maximization With Optimal Time Domain Equalizer Filter Bank Architecture
Evans, Brian L.
DMT Bit Rate Maximization With Optimal Time Domain Equalizer Filter Bank Architecture. Milos ... Discrete multi-tone (DMT) is a multicarrier modulation method in which the available bandwidth of a communication channel ... create nearly orthogonal subchannels. DMT has been standardized in [1, 2, 3, 4]. A similar multi-carrier
Gaussian mixture sigma-point particle filter for optical indoor navigation system
NASA Astrophysics Data System (ADS)
Zhang, Weizhi; Gu, Wenjun; Chen, Chunyi; Chowdhury, M. I. S.; Kavehrad, Mohsen
2013-12-01
With the fast growth and popularization of smart computing devices, there is a rising demand for accurate and reliable indoor positioning. Recently, systems using visible light communications (VLC) technology have been considered as candidates for indoor positioning applications. A number of researchers have reported that VLC-based positioning systems could achieve position estimation accuracy on the order of centimeters. This paper proposes an indoor navigation environment based on visible light communications (VLC) technology. Light-emitting diodes (LEDs), which are essentially semiconductor devices, can be easily modulated and used as transmitters within the proposed system. Positioning is realized by collecting received-signal-strength (RSS) information on the receiver side, following which least-squares estimation is performed to obtain the receiver position. To enable tracking of the user's trajectory and reduce the effect of outliers in the raw measurements, different filters are employed. Computer simulations show that the Gaussian mixture sigma-point particle filter (GM-SPPF) outperforms other filters, such as the basic Kalman filter and the sequential importance resampling particle filter (SIR-PF), at a reasonable computational cost.
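The least-squares position step described above can be sketched as linearized trilateration: with ranges inferred from RSS to known LED positions, subtracting the first range equation from the rest yields a linear system. The anchor geometry below is an illustrative assumption, not the paper's room layout.

```python
import numpy as np

def lsq_position(anchors, dists):
    """Linearized least-squares trilateration.  anchors: (n, 2) known
    transmitter positions; dists: (n,) ranges inferred from RSS.
    Subtracting equation 0 from equations 1..n-1 linearizes
    ||p - a_i||^2 = d_i^2 into A p = b, solved by least squares."""
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (x0**2).sum())
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With noisy RSS-derived ranges, the output of this step is exactly the kind of jittery raw estimate that the GM-SPPF and the other filters in the paper are used to smooth.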
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. The results provided in this report suggest improved performance of the motion cueing algorithm.
Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter
Zhou, Ning; Meng, Da; Lu, Shuai
2013-11-11
In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF’s performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
Continuous collection of soluble atmospheric particles with a wetted hydrophilic filter.
Takeuchi, Masaki; Ullah, S M Rahmat; Dasgupta, Purnendu K; Collins, Donald R; Williams, Allen
2005-12-15
Approximately one-third of the area (14-mm diameter of a 25-mm diameter) of a 5-microm uniform pore size polycarbonate filter is continuously wetted by a 0.25 mL/min water mist. The water forms a continuous thin film on the filter and percolates through it. The flowing water substantially reduces the effective pore size of the filter. At the operational air sampling flow rate of 1.5 standard liters per minute, such a particle collector (PC) efficiently captures particles down to very small size. As determined by fluorescein-tagged NaCl aerosol generated by a vibrating orifice aerosol generator, the capture efficiency was 97.7+% for particle aerodynamic diameters ranging from 0.28 to 3.88 microm. Further, 55.3 and 80.3% of 25- and 100-nm (NH4)2SO4 particles generated by size classification with a differential mobility analyzer were respectively collected by the device. The PC is integrally coupled with a liquid collection reservoir. The liquid effluent from the wetted filter collector, bearing the soluble components of the aerosol, can be continuously collected or periodically withdrawn. The latter strategy permits the use of a robust syringe pump for the purpose. Coupled with a PM2.5 cyclone inlet and a membrane-based parallel plate denuder at the front end and an ion chromatograph at the back end, the PC readily operated for at least 4-week periods without filter replacement or any other maintenance. PMID:16351153
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, such as log contrast models. We also compare the modified PSO's performance with that of Fedorov's algorithm, a popular algorithm for generating optimal designs; the cocktail algorithm; and the recent algorithm proposed in [1]. PMID:26091237
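For reference, the global-best PSO baseline that such modified variants start from can be sketched as follows (a generic textbook sketch, not the paper's ProjPSO; the swarm parameters and test function are arbitrary choices for illustration):

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))      # positions
    v = np.zeros_like(x)                      # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# minimize the 3-D sphere function as a smoke test
best, val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```

Projection-based variants such as ProjPSO add a projection of each candidate back onto the feasible design space after the position update.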
The optimization of continually-operating rotary filters for vacuum and hyperbaric filtration
Nicolaou, I.
1995-12-31
This paper demonstrates an approach to the optimization of such filters. It not only incorporates proven, simple, recently developed equations for calculating the solids throughput, residual moisture, and gas throughput, but also introduces equations for calculating the specific product costs for both vacuum and hyperbaric filtration, taking into account the influence of the filter, the compressor, and a possible thermal drying stage. An easy-to-use graphical method is offered in which the target parameters are plotted as a function of the pressure difference, so that the optimal process pressure difference can be established independent of the specific cake permeability or product fineness. Finally, the optimization method is demonstrated for a concrete application.
Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design
NASA Astrophysics Data System (ADS)
Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa
2013-09-01
This article presents a control strategy for a three-phase grid interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics injected into the grid, considering the conduction and switching losses at a constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The proposed control strategy also limits the startup transients; in addition, the optimal LCL filter has lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment and the results are analyzed.
Design Optimization of Vena Cava Filters: An application to dual filtration devices
Singer, M A; Wang, S L; Diachin, D P
2009-12-03
Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.
Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu
2015-01-01
Aiming to address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many redundant operations can be avoided through offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a numerical approach the method needs no precision-losing transformation/approximation of system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
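The structure-exploiting idea, skipping multiplications against blocks known offline to be zero, can be illustrated for the covariance time update with a block-diagonal transition matrix (an illustrative sketch under an assumed block structure, not the paper's exact derivation):

```python
import numpy as np

def propagate_block_diag(P, F_blocks, Q):
    """Covariance time update P <- F P F^T + Q for a block-diagonal F:
    only block-sized products are formed, skipping the zero blocks."""
    sizes = np.cumsum([0] + [b.shape[0] for b in F_blocks])
    out = np.array(Q, dtype=float)
    for i, Fi in enumerate(F_blocks):
        ri = slice(sizes[i], sizes[i + 1])
        for j, Fj in enumerate(F_blocks):
            cj = slice(sizes[j], sizes[j + 1])
            out[ri, cj] += Fi @ P[ri, cj] @ Fj.T
    return out

# check against the dense computation
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
F = np.zeros((5, 5)); F[:2, :2] = A; F[2:, 2:] = B
M = rng.standard_normal((5, 5)); P = M @ M.T       # a valid covariance
Q = np.eye(5)
fast = propagate_block_diag(P, [A, B], Q)
dense = F @ P @ F.T + Q
```

With many small blocks, the block-wise form replaces one n-by-n product with several much smaller ones, which is where the reported CPU-time savings come from.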
Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini
2012-01-01
We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively. PMID:25300451
Voglis, Costas
Global Optimization Framework (draft version), C. Voglis, Computer Science Department, University. Builds on the memetic global optimization first described in [13] and incorporates the Merlin optimization environment [10]. Descriptors: G.1.6 [Numerical Analysis]: Optimization -- global optimization, memetic algorithms, particle
Complex Stock Trading Strategy Based on Particle Swarm Optimization
Cheung, David Wai-lok
Complex Stock Trading Strategy Based on Particle Swarm Optimization Fei Wang, Philip L.H. Yu and David W. Cheung Abstract-- Trading rules have been utilized in the stock market to make profit for more than a century. However, only using a single trading rule may not be sufficient to predict the stock
Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN
Li, Yangmin
Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN Yangmin Li and Xin Chen trajectories. Based on this property, a PSO algorithm for path planning is proposed. The path planning generates smooth path with low computational cost to avoid obstacles, so that robot can use smooth control
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
A fast particle filter object tracking algorithm by dual features fusion
NASA Astrophysics Data System (ADS)
Zhao, Shou-wei; Wang, Wei-ming; Ma, Sa-sa; Zhang, Yong; Yu, Ming
2014-11-01
Within the particle filtering framework, a video object tracking method based on dual cues extracted from the integral histogram and the integral image is proposed. The method takes both the color histogram feature and the Haar-like feature of the target region as the feature representation model, tracking the target region with a particle filter. While preserving real-time responsiveness, it overcomes the poor precision, large fluctuations, and sensitivity to illumination that result from relying on the histogram feature alone. It shows high efficiency in tracking the target object in multiple video sequences. Finally, it is applied in an augmented reality assisted maintenance prototype system, which demonstrates that the method can be used in the tracking registration process of augmented reality systems based on natural features.
Wang, Bo; Xiao, Xuan; Xia, Yuanqing; Fu, Mengyin
2013-01-01
A ship is not an absolutely rigid body. Many factors can cause deformations that lead to large errors in mounted devices, especially navigation systems. Such errors should be estimated and compensated effectively, or they will severely reduce the navigation accuracy of the ship. In order to estimate the deformation, an unscented particle filter method for estimation of shipboard deformation based on an inertial measurement unit is presented. In this method, a nonlinear shipboard deformation model is built. Simulations demonstrated the accuracy reduction due to deformation. An attitude plus angular rate matching mode is then proposed as a framework for estimating the shipboard deformation using inertial measurement units. Within this framework, given the nonlinearity of the system model, an unscented particle filter method is proposed to estimate and compensate the deformation angles. Simulations show that the proposed method gives accurate and rapid deformation estimates, which can increase navigation accuracy after compensation of the deformation. PMID:24248280
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.
Particle Filters for Real-Time Fault Detection in Planetary Rovers
NASA Technical Reports Server (NTRS)
Dearden, Richard; Clancy, Dan; Koga, Dennis (Technical Monitor)
2001-01-01
Planetary rovers provide a considerable challenge for robotic systems in that they must operate for long periods autonomously, or with relatively little intervention. To achieve this, they need to have on-board fault detection and diagnosis capabilities in order to determine the actual state of the vehicle, and decide what actions are safe to perform. Traditional model-based diagnosis techniques are not suitable for rovers due to the tight coupling between the vehicle's performance and its environment. Hybrid diagnosis using particle filters is presented as an alternative, and its strengths and weaknesses are examined. We also present some extensions to particle filters that are designed to make them more suitable for use in diagnosis problems.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Zheng, Jason Xin; Nguyen, Kayla; He, Yutao
2010-01-01
Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
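The thread-decomposition view, in which each decimated output is one independent finite convolution, can be sketched in a few lines (an illustrative software sketch of the idea; the paper itself targets FPGA hardware):

```python
import numpy as np

def decimating_fir_threads(x, h, M):
    """Decimate-by-M FIR where each output y[k] = sum_j h[j]*x[k*M - j]
    is computed as its own independent finite convolution ('thread')."""
    y = np.zeros(len(x) // M)
    for k in range(len(y)):                 # each k is one concurrent thread
        for j, hj in enumerate(h):
            i = k * M - j
            if 0 <= i < len(x):
                y[k] += hj * x[i]
    return y

# equivalent to filtering at the full rate, then keeping every M-th sample
x = np.arange(16, dtype=float)
h = np.array([0.25, 0.5, 0.25])
reference = np.convolve(x, h)[: len(x)][::2]
threaded = decimating_fir_threads(x, h, M=2)
```

Because the threads share no state, they map naturally onto parallel hardware, whereas polyphase decomposition restructures the filter itself.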
3D Air Filtration Modeling for Nanofiber Based Filters in the Ultrafine Particle Size Range
NASA Astrophysics Data System (ADS)
Sambaer, Wannes; Zatloukal, Martin; Kimmer, Dusan
2011-07-01
In this work, a novel 3D filtration model for nanofiber based filters has been proposed and tested. For model validation purposes, the filtration efficiency characteristics of two different polyurethane nanofiber based structures (prepared by the electrospinning process) were determined experimentally in the ultrafine particle size range (20-400 nm). It has been found that the proposed model is able to predict the measured filtration efficiency curves reasonably well for both tested samples.
Jaeschke, B C; Lind, O C; Bradshaw, C; Salbu, B
2015-01-01
Radioactive particles are aggregates of radioactive atoms that may contain significant activity concentrations. They have been released into the environment from nuclear weapons tests, and from accidents and effluents associated with the nuclear fuel cycle. Aquatic filter-feeders can capture and potentially retain radioactive particles, which could then provide concentrated doses to nearby tissues. This study experimentally investigated the retention and effects of radioactive particles in the blue mussel, Mytilus edulis. Spent fuel particles originating from the Dounreay nuclear establishment, and collected in the field, comprised a U and Al alloy containing fission products such as (137)Cs and (90)Sr/(90)Y. Particles were introduced into mussels in suspension with plankton-food or through implantation in the extrapallial cavity. Of the particles introduced with food, 37% were retained for 70 h, and were found on the siphon or gills, with the notable exception of one particle that was ingested and found in the stomach. Particles not retained seemed to have been actively rejected and expelled by the mussels. The largest and most radioactive particle (estimated dose rate 3.18 ± 0.06 Gy h(-1)) induced a significant increase in Comet tail-DNA %. In one case this particle caused a large white mark (suggesting necrosis) in the mantle tissue with a simultaneous increase in micronucleus frequency observed in the haemolymph collected from the muscle, implying that non-targeted effects of radiation were induced by radiation from the retained particle. White marks found in the tissue were attributed to ionising radiation and physical irritation. The results indicate that current methods used for risk assessment, based upon the absorbed dose equivalent limit and estimation of the "no-effect dose", are inadequate for radioactive particle exposures.
Knowledge is lacking about the ecological implications of radioactive particles released into the environment, for example potential recycling within a population, or trophic transfer in the food chain. PMID:25240099
Segmentation of nerve bundles and ganglia in spine MRI using particle filters.
Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina
2011-01-01
Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741
Robust Dead Reckoning System for Mobile Robots Based on Particle Filter and Raw Range Scan
Duan, Zhuohua; Cai, Zixing; Min, Huaqing
2014-01-01
Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) subject to faults such as the sticking of sensors or the slippage of wheels, because the discrete fault modes and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches for handling hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and the different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318
Kornelakis, Aris
2010-12-15
Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals to the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of the PV modules, the PV modules optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters. (author)
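The Net Present Value objective mentioned above is straightforward to compute; a minimal sketch with invented cash-flow figures (illustrative only, not taken from the paper) is:

```python
def npv(cash_flows, rate):
    """Net Present Value: discount each year's net cash flow back to
    year 0; cash_flows[0] is the (negative) initial investment."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical PV system: 10,000 invested up front, then 1,500/year net
# benefit for 10 years, discounted at 5% (all figures are assumptions)
flows = [-10000.0] + [1500.0] * 10
profit = npv(flows, 0.05)
```

In the multiobjective setting described above, this economic objective would be traded off against the avoided-emissions objective over the candidate system configurations.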
On optimal filtering of GPS dual frequency observations without using orbit information
NASA Technical Reports Server (NTRS)
Eueler, Hans-Juergen; Goad, Clyde C.
1991-01-01
The concept of optimal filtering of observations collected with a dual frequency GPS P-code receiver is investigated in comparison to an approach for C/A-code units. The filter presented here uses only data gathered between one receiver and one satellite. The estimated state vector consists of a one-way pseudorange, ionospheric influence, and ambiguity biases. Neither orbit information nor station information is required. The independently estimated biases are used to form double differences where, in case of a P-code receiver, the wide lane integer ambiguities are usually recovered successfully except when elevation angles are very small. An elevation dependent uncertainty for pseudorange measurements was discovered for different receiver types. An exponential model for the pseudorange uncertainty was used with success in the filter gain computations.
Optimized SU-8 UV-lithographical process for a Ka-band filter fabrication
NASA Astrophysics Data System (ADS)
Jin, Peng; Jiang, Kyle; Tan, Jiubin; Lancaster, M. J.
2005-04-01
The rapid expansion of millimeter wave communication has brought increasing attention to Ka-band filter fabrication. Described in this paper is a high quality UV-lithographic process for making high aspect ratio parts of a coaxial Ka-band dual mode filter using an ultra-thick SU-8 photoresist layer, which has a potential application in LMDS systems. Owing to the strict requirements on the perpendicular geometry of the filter parts, the microfabrication research focused on modifying the SU-8 UV-lithographic process to improve the verticality of the sidewalls and achieve a high aspect ratio. Based on a study of the photoactive properties of ultra-thick SU-8 layers, an optimized prebake time has been found that minimizes UV absorption by the SU-8. The optimization principle has been tested in a series of UV-lithography experiments with different prebake times, and proved effective. An optimized SU-8 UV-lithographic process has thus been developed for the fabrication of thick-layer filter structures. During test fabrication, microstructures with aspect ratios as high as 40 were produced in 1000 μm ultra-thick SU-8 layers using standard UV-lithography equipment, with sidewall angles controlled between 85 and 90 degrees. The high quality SU-8 structures will then be used as positive moulds for producing copper structures by an electroforming process. The microfabrication process presented in this paper suits the proposed filter well, and it also shows good potential for volume production of high quality RF devices.
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
The spatial weights for electrodes, called common spatial patterns (CSP), are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed, and several methods for designing the filter have been proposed. However, the existing methods cannot accommodate multiple brain activities described by different frequency bands and different spatial patterns, such as the activities of the mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design multiple filters and spatial weights that extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimizing an objective function that is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function extracts good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery. PMID:22255731
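The CSP baseline that the proposed method extends can be computed by whitening the composite covariance and then diagonalizing one class covariance (a standard CSP sketch run on synthetic data; it is not the paper's bank-of-filters method):

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns via whitening + eigendecomposition.
    X1, X2: (trials, channels, samples) per class. Returns a
    channels-by-channels matrix whose first/last rows are the most
    discriminative spatial filters."""
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = cov(X1), cov(X2)
    d, U = np.linalg.eigh(C1 + C2)          # whiten the composite covariance
    P = np.diag(d ** -0.5) @ U.T
    _, V = np.linalg.eigh(P @ C1 @ P.T)     # diagonalize class 1 in whitened space
    return V.T @ P                          # rows are spatial filters

# synthetic check: class 1 is strong on channel 0, class 2 on channel 1
rng = np.random.default_rng(1)
X1 = rng.standard_normal((20, 3, 200)); X1[:, 0, :] *= 3.0
X2 = rng.standard_normal((20, 3, 200)); X2[:, 1, :] *= 3.0
w = csp_filters(X1, X2)[-1]                 # filter maximizing class-1 variance
```

The last row maximizes the variance ratio between the classes, so in this synthetic example it should load predominantly on channel 0.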
Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination
NASA Technical Reports Server (NTRS)
Downie, John D.
1992-01-01
Binary phase-only filters (BPOFs) have attracted much study because they can be implemented on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1999-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses tha_ may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the I-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.
Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization
NASA Technical Reports Server (NTRS)
Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.
1998-01-01
Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.
Fennel, Katja
and satellite observations. Jann Paul Mattern, Michael Dowd, and Katja Fennel. Received 15 November 2012. Dowd, and K. Fennel (2013), Particle filter-based data assimilation for a 3-dimensional biological
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
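The global-best PSO loop that variants like this build on can be sketched in a few lines. The objective below is a toy quadratic stand-in for the FAC performance model, and the particle counts, inertia, and acceleration coefficients are illustrative assumptions, not the thesis's settings:

```python
import random

def pso_maximize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer (maximization).
    Illustrative sketch only; parameter values are assumptions."""
    dim = len(bounds)
    rnd = random.Random(42)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # Inertia + cognitive + social velocity update.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move and clamp to the feasible box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for Isp(O/F ratio, expansion ratio): peak at (2.3, 50.0).
isp = lambda x: 300.0 - (x[0] - 2.3) ** 2 - 0.001 * (x[1] - 50.0) ** 2
best, val = pso_maximize(isp, [(1.0, 8.0), (5.0, 200.0)])
```

Brute-force validation, as described in the abstract, would simply grid over the same bounds and confirm the swarm's optimum.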
Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy.
Wang, Ke; Qiu, Ping
2015-05-01
Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration. PMID:25950644
Optimal spectral filtering in soliton self-frequency shift for deep-tissue multiphoton microscopy
NASA Astrophysics Data System (ADS)
Wang, Ke; Qiu, Ping
2015-05-01
Tunable optical solitons generated by soliton self-frequency shift (SSFS) have become valuable tools for multiphoton microscopy (MPM). Recent progress in MPM using 1700 nm excitation enabled visualizing subcortical structures in mouse brain in vivo for the first time. Such an excitation source can be readily obtained by SSFS in a large effective-mode-area photonic crystal rod with a 1550-nm fiber femtosecond laser. A longpass filter was typically used to isolate the soliton from the residual in order to avoid excessive energy deposit on the sample, which ultimately leads to optical damage. However, since the soliton was not cleanly separated from the residual, the criterion for choosing the optimal filtering wavelength is lacking. Here, we propose maximizing the ratio between the multiphoton signal and the n'th power of the excitation pulse energy as a criterion for optimal spectral filtering in SSFS when the soliton shows dramatic overlapping with the residual. This optimization is based on the most efficient signal generation and entirely depends on physical quantities that can be easily measured experimentally. Its application to MPM may reduce tissue damage, while maintaining high signal levels for efficient deep penetration.
Research of spatial high-pass filtering algorithm in particles real-time measurement system
NASA Astrophysics Data System (ADS)
Jin, Xuanhong; Dai, Shuguang; Mu, Pingan
2010-08-01
With the growing application of CIMS, enterprises increasingly need CAQ systems as they pursue flexibility and automation. Built on computer vision technology, Automated Visual Inspection (AVI) is a non-contact measurement method that combines technologies such as image processing and precision measurement. The particle real-time measurement system analyzes the target image obtained by the computer vision system and extracts useful measurement information. Using existing prior knowledge, the user can take timely measures to reduce floating ash. Based on an analysis of the particle images, this paper investigates a spatial high-pass filtering method, the gradient operator, suited to the characteristics of the images. To suppress background interference and enhance the edge lines of particles, it processes the images with kernels in two directions. This spatial high-pass filtering algorithm also supports the subsequent image processing used to obtain useful information about floating ash particles.
Filter feeders and plankton increase particle encounter rates through flow regime control
Humphries, Stuart
2009-01-01
Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has great implications for understanding of selection pressure on the physiology and ecology of organisms, for example filter feeders able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are approximately 33% of that predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879
NASA Astrophysics Data System (ADS)
Kirchstetter, T.; Preble, C.; Dallmann, T. R.; DeMartini, S. J.; Tang, N. W.; Kreisberg, N. M.; Hering, S. V.; Harley, R. A.
2013-12-01
Diesel particle filters have become widely used in the United States since the introduction in 2007 of a more stringent exhaust particulate matter emission standard for new heavy-duty diesel vehicle engines. California has instituted additional regulations requiring retrofit or replacement of older in-use engines to accelerate emission reductions and air quality improvements. This presentation summarizes pollutant emission changes measured over several field campaigns at the Port of Oakland in the San Francisco Bay Area associated with diesel particulate filter use and accelerated modernization of the heavy-duty truck fleet. Pollutants in the exhaust plumes of hundreds of heavy-duty trucks en route to the Port were measured in 2009, 2010, 2011, and 2013. Ultrafine particle number, black carbon (BC), nitrogen oxides (NOx), and nitrogen dioxide (NO2) concentrations were measured at a frequency of 1 Hz or greater and normalized to measured carbon dioxide concentrations to quantify fuel-based emission factors (grams of pollutant emitted per kilogram of diesel consumed). The size distribution of particles in truck exhaust plumes was also measured at 1 Hz. In the two most recent campaigns, emissions were linked on a truck-by-truck basis to installed emission control equipment via the matching of transcribed license plates to a Port truck database. Accelerated replacement of older engines with newer engines and retrofit of trucks with diesel particle filters reduced fleet-average emissions of BC and NOx. Preliminary results from the two most recent field campaigns indicate that trucks without diesel particle filters emit 4 times more BC than filter-equipped trucks. Diesel particle filters increase emissions of NO2, however, and filter-equipped trucks have NO2/NOx ratios that are 4 to 7 times greater than trucks without filters.
Preliminary findings related to particle size distribution indicate that (a) most trucks emitted particles characterized by a single mode of approximately 100 nm in diameter and (b) new trucks originally equipped with diesel particle filters were 5 to 6 times more likely than filter-retrofitted trucks and trucks without filters to emit particles characterized by a single mode in the range of 10 to 30 nm in diameter.
Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique
NASA Astrophysics Data System (ADS)
Oonsivilai, Anant; Marungsri, Boonruang
2008-10-01
An application of an intelligent search technique to find the optimal parameters of a power system stabilizer (PSS) with a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of such techniques in tuning the PID-PSS parameters. The damping of system oscillations is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared with that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.
NASA Astrophysics Data System (ADS)
Huang, Haibin; Zhuang, Yufei
2015-08-01
This paper proposes a method that plans energy-optimal trajectories for multi-satellite formation reconfiguration in the deep space environment. A novel co-evolutionary particle swarm optimization algorithm is presented to solve the nonlinear programming problem, so that the computational complexity of calculating gradient information can be avoided. Each swarm represents one satellite, and through communication with the other swarms during the evolution, collisions between satellites can be avoided. In addition, a dynamic depth-first search algorithm is proposed to resolve the redundant search problem of the co-evolutionary particle swarm optimization method, with which the computation time can be shortened considerably. In order to make the actual trajectories optimal and collision-free under disturbance, a re-planning strategy is derived for the formation reconfiguration maneuver.
NASA Astrophysics Data System (ADS)
Oladyshkin, S.; Class, H.; Helmig, R.; Nowak, W.
2011-12-01
Underground flow systems, such as oil or gas reservoirs and CO2 storage sites, are an important and challenging class of complex dynamic systems. Lacking information about distributed systems properties (such as porosity, permeability,...) leads to model uncertainties up to a level where quantification of uncertainties may become the dominant question in application tasks. History matching to past production data becomes an extremely important issue in order to improve the confidence of prediction. The accuracy of history matching depends on the quality of the established physical model (including, e.g. seismic, geological and hydrodynamic characteristics, fluid properties etc). The history matching procedure itself is very time consuming from the computational point of view. Even one single forward deterministic simulation may require parallel high-performance computing. This fact makes a brute-force non-linear optimization approach not feasible, especially for large-scale simulations. We present a novel framework for history matching which takes into consideration the nonlinearity of the model and of inversion, and provides a cheap but highly accurate tool for reducing prediction uncertainty. We propose an advanced framework for history matching based on the polynomial chaos expansion (PCE). Our framework reduces complex reservoir models and consists of two main steps. In step one, the original model is projected onto a so-called integrative response surface via very recent PCE technique. This projection is totally non-intrusive (following a probabilistic collocation method) and optimally constructed for available reservoir data at the prior stage of Bayesian updating. The integrative response surface keeps the nonlinearity of the initial model at high order and incorporates all suitable parameters, such as uncertain parameters (porosity, permeability etc.) and design or control variables (injection rate, depth etc.). 
Technically, the computational costs for constructing the response surface depend on the number of parameters and the expansion degree. Step two consists of Bayesian updating in order to match the reduced model to available measurements of state variables or other past or real-time observations of system behavior (e.g. past production data or pressure at monitoring wells during a certain time period). In step two we apply particle filtering on the integrative response surface constructed in step one. Particle filtering is a strong technique for Bayesian updating which accounts for the nonlinearity of the inverse problem in history matching more accurately than the ensemble Kalman filter does. Thanks to the computational efficiency of PCE and the integrative response surface, Bayesian updating for history matching becomes an interactive task and can incorporate real-time measurements.
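The particle filtering used in step two follows the standard predict-weight-resample cycle. The sketch below illustrates that cycle on a toy scalar model; it is a generic bootstrap filter, not the reservoir implementation, and all names and noise levels are illustrative assumptions:

```python
import random, math

def particle_filter_step(particles, observation, transition, likelihood, rnd):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through the stochastic model.
    predicted = [transition(p, rnd) for p in particles]
    # Weight: Bayes update of each particle against the new observation.
    weights = [likelihood(observation, p) for p in predicted]
    total = sum(weights)
    # Resample (multinomial): concentrate particles on high-likelihood states.
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point shortfall
    resampled = []
    for _ in predicted:
        u = rnd.random()
        resampled.append(predicted[next(j for j, c in enumerate(cdf) if c >= u)])
    return resampled

# Toy usage: track a static scalar state from noisy observations.
rnd = random.Random(0)
true_state = 5.0
particles = [rnd.gauss(0.0, 3.0) for _ in range(500)]
transition = lambda x, r: x + r.gauss(0.0, 0.5)          # random-walk model
likelihood = lambda y, x: math.exp(-0.5 * (y - x) ** 2)  # Gaussian obs. noise
for _ in range(20):
    obs = true_state + rnd.gauss(0.0, 0.3)
    particles = particle_filter_step(particles, obs, transition, likelihood, rnd)
estimate = sum(particles) / len(particles)
```

In the framework above, `transition` would be evaluated on the cheap PCE response surface rather than on the full forward simulator, which is what makes the updating interactive.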
A particle filter to reconstruct a free-surface flow from a depth camera
NASA Astrophysics Data System (ADS)
Combés, Benoit; Heitz, Dominique; Guibert, Anthony; Mémin, Etienne
2015-10-01
We investigate the combined use of a Kinect depth sensor and of a stochastic data assimilation (DA) method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The DA scheme is enhanced by using two observations in the correction step instead of the single observation used classically. We evaluate the developed approach on two numerical test cases: a collapse of a water column as a toy example and a flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors and also to initial and inflow conditions is considered. We illustrate the benefit of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor in capturing the temporal sequences of depth observations is investigated. Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat-bottomed tank. It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs the velocity and height of the free-surface flow based on noisy measurements of the elevation alone.
Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin
2014-03-01
A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Due to the noisy and erroneous measurements of the MEMS gyroscope, it is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, with the augmented drift dynamics of the gyroscope, are employed as the system state-space model. Nonlinear attitude kinematics, drift, and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear and Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is used as the input of the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system demonstrate the efficiency of the method. PMID:24342270
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate. PMID:26064085
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS
Xia, Jing; Wang, Michelle Yongmei
2015-01-01
Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
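A simple way to see why joint state-and-parameter estimation is hard is to append the static parameter to each particle and filter the augmented state. The toy sketch below does exactly that; note it is a naive augmentation for illustration, whereas the paper avoids the resulting parameter degeneration by learning parameters through low-dimensional sufficient statistics. The model and all numbers here are assumptions:

```python
import random, math

def joint_pf_step(particles, obs, rnd):
    """One predict-weight-resample step with the static parameter theta
    appended to the state. Toy model: x' = x + theta + N(0, 0.1),
    observed as y = x + N(0, 0.2)."""
    # Predict: theta is static, so only x is propagated.
    predicted = [(x + theta + rnd.gauss(0.0, 0.1), theta)
                 for x, theta in particles]
    # Weight by the Gaussian observation likelihood of y given x.
    weights = [math.exp(-0.5 * ((obs - x) / 0.2) ** 2) for x, _ in predicted]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point shortfall
    # Multinomial resampling; surviving particles carry plausible thetas.
    resampled = []
    for _ in predicted:
        u = rnd.random()
        resampled.append(predicted[next(j for j, c in enumerate(cdf) if c >= u)])
    return resampled

# Recover a drift parameter theta = 0.9 from noisy observations of x alone.
rnd = random.Random(7)
particles = [(0.0, rnd.uniform(0.0, 2.0)) for _ in range(2000)]
true_x = 0.0
for _ in range(30):
    true_x += 0.9
    obs = true_x + rnd.gauss(0.0, 0.2)
    particles = joint_pf_step(particles, obs, rnd)
theta_hat = sum(t for _, t in particles) / len(particles)
```

Because resampling duplicates but never perturbs theta, its diversity collapses over time; sufficient-statistics-based sequential learning, as used in the paper, is one remedy.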
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.
Garro, Beatriz A; Vázquez, Roberto A
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms
Garro, Beatriz A.; Vázquez, Roberto A.
2015-01-01
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computations and slow convergence in solving this problem is a major obstacle. In this paper we describe a new approach for solving this problem, which is called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which definitive optimization algorithm and stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
The use of an inert, radioactively labeled microsphere as a measure of particle accumulation (filtration activity) by Mulinia lateralis (Say) and Mytilus edulis L. was evaluated. Bottom sediment plus temperature and salinity of the water were varied to induce changes in filtratio...
Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers
Rogers, Adam
2011-01-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automa...
Initial parameters problem of WNN based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Yang, Chi-I.; Wang, Kaicheng; Chang, Kueifang
2014-04-01
Stock price prediction with a wavelet neural network involves minimizing the RMSE by adjusting the initial values of the network parameters, the training data percentage, and the threshold value in order to predict the fluctuation of the stock price over two weeks. The objective of this dissertation is to reduce the number of parameters to be adjusted while still minimizing the RMSE. There are three initial network parameters: w, t, and d. The optimization of these three parameters is conducted by the Particle Swarm Optimization method, and a comparison is made with the performance of the original program, showing that the RMSE can be even lower than before the optimization. It is also shown in this dissertation that there is no need to adjust the training data percentage or the threshold value for 68% of the stocks when the training data percentage is set at 10% and the threshold value is set at 0.01.
Miller, Travis Reed
2010-01-01
This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...
Nanodosimetry-Based Plan Optimization for Particle Therapy
Casiraghi, Margherita; Schulte, Reinhard W.
2015-01-01
Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
Ruiz-Cruz, Riemann; Sanchez, Edgar N; Ornelas-Tellez, Fernando; Loukianov, Alexander G; Harley, Ronald G
2013-12-01
In this paper, the authors propose a particle swarm optimization (PSO) for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law in order to achieve trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLFs depend on matrix selection in order to achieve the control objectives; this matrix is determined by two mechanisms: initially, fixed parameters are proposed for this matrix by a trial-and-error method and then by using the PSO algorithm. The inverse optimal control scheme is illustrated via simulations for the DFIG, including the comparison between both mechanisms. PMID:24273145
Heuristic optimization of the scanning path of particle therapy beams
Pardo, J.; Donetti, M.; Bourhaleb, F.; Ansarinejad, A.; Attili, A.; Cirio, R.; Garella, M. A.; Giordanengo, S.; Givehchi, N.; La Rosa, A.; Marchetto, F.; Monaco, V.; Pecka, A.; Peroni, C.; Russo, G.; Sacchi, R.
2009-06-15
Quasidiscrete scanning is a delivery strategy for proton and ion beam therapy in which the beam is turned off when a slice is finished and a new energy must be set but not during the scanning between consecutive spots. Different scanning paths lead to different dose distributions due to the contribution of the unintended transit dose between spots. In this work an algorithm to optimize the scanning path for quasidiscrete scanned beams is presented. The classical simulated annealing algorithm is used. It is a heuristic algorithm frequently used in combinatorial optimization problems, which allows us to obtain nearly optimal solutions in acceptable running times. A study focused on the best choice of operational parameters on which the algorithm performance depends is presented. The convergence properties of the algorithm have been further improved by using the next-neighbor algorithm to generate the starting paths. Scanning paths for two clinical treatments have been optimized. The optimized paths are found to be shorter than the back-and-forth, top-to-bottom (zigzag) paths generally provided by the treatment planning systems. The gamma method has been applied to quantify the improvement achieved on the dose distribution. Results show a reduction of the transit dose when the optimized paths are used. The benefit is clear especially when the fluence per spot is low, as in the case of repainting. The minimization of the transit dose can potentially allow the use of higher beam intensities, thus decreasing the treatment time. The algorithm implemented for this work can optimize efficiently the scanning path of quasidiscrete scanned particle beams. Optimized scanning paths decrease the transit dose and lead to better dose distributions.
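The combination described, a next-neighbor starting path refined by classical simulated annealing, can be sketched as below. The move type (2-opt segment reversal), cooling schedule, and spot coordinates are illustrative assumptions, not the clinical implementation:

```python
import math, random

def path_length(points, order):
    """Total transit length of visiting the spots in the given order."""
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbor_order(points):
    """Greedy next-neighbor path used to seed the annealer."""
    remaining = set(range(1, len(points)))
    order = [0]
    while remaining:
        last = order[-1]
        nxt = min(remaining, key=lambda j: math.dist(points[last], points[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def anneal_path(points, t0=1.0, cooling=0.999, n_iters=20000, seed=1):
    """Classical simulated annealing with 2-opt reversals (a sketch)."""
    rnd = random.Random(seed)
    order = nearest_neighbor_order(points)
    best = order[:]
    t = t0
    for _ in range(n_iters):
        # Propose reversing a random segment (2-opt move).
        i, j = sorted(rnd.sample(range(1, len(order)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        delta = path_length(points, cand) - path_length(points, order)
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta < 0 or rnd.random() < math.exp(-delta / max(t, 1e-12)):
            order = cand
            if path_length(points, order) < path_length(points, best):
                best = order[:]
        t *= cooling  # geometric cooling schedule
    return best

# Illustrative slice of 30 random spot positions.
rnd = random.Random(3)
pts = [(rnd.random(), rnd.random()) for _ in range(30)]
optimized = anneal_path(pts)
```

Shorter paths directly reduce the unintended transit dose between spots, which is the quantity the paper's optimization targets.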
Multi-Objective Particle Swarm Optimization with time variant inertia and acceleration coefficients
Pal, Sankar Kumar
Accepted 23 June 2007. In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle
Modified Particle Filtering Algorithm for Single Acoustic Vector Sensor DOA Tracking.
Li, Xinbo; Sun, Haixin; Jiang, Liangxu; Shi, Yaowu; Wu, Yue
2015-01-01
The conventional direction of arrival (DOA) estimation algorithm with a static-sources assumption usually estimates the source angles of two adjacent moments independently, so the correlation between the moments is not considered. In this article, we focus on the DOA estimation of moving sources and propose a modified particle filtering (MPF) algorithm built on the state space model of a single acoustic vector sensor. Although the particle filtering (PF) algorithm has been introduced for acoustic vector sensor applications, it is not suitable for cases in which one of the source angles is estimated with a large deviation, because the two angles (pitch angle and azimuth angle) cannot then be simultaneously employed to update the state through the resampling step of the PF algorithm. To solve these problems, the MPF algorithm introduces the state estimate of the previous moment into the particle sampling of the present moment to improve the importance function. Moreover, the independence of the pitch angle and azimuth angle is exploited, and the two angles are sampled and evaluated separately. Then, the MUSIC spectrum function is used as the "likelihood" function of the MPF algorithm, and the modified PF-MUSIC (MPF-MUSIC) algorithm is proposed to improve the root mean square error (RMSE) and the probability of convergence. The theoretical analysis and the simulation results validate the effectiveness and feasibility of the two proposed algorithms. PMID:26501280
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) utilize only up to second-order statistics and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
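The particle filter underlying this study can be illustrated with a minimal bootstrap variant on a scalar toy model (the orbit-determination dynamics are far richer); the random-walk dynamics, Gaussian noise levels and particle count here are illustrative assumptions, not the paper's setup:

```python
import math
import random

def bootstrap_pf(y_obs, n=500, q=0.5, r=0.5, seed=0):
    """Minimal bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the per-step particle clouds, whose
    empirical distribution approximates the state PDF."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]  # samples from the prior
    clouds = []
    for y in y_obs:
        # propagate each particle through the (toy) dynamics
        parts = [x + rng.gauss(0.0, q) for x in parts]
        # weight by the Gaussian measurement likelihood
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        # multinomial resampling: the surviving cloud represents the posterior PDF
        parts = rng.choices(parts, weights=w, k=n)
        clouds.append(parts[:])
    return clouds
```

The returned clouds are exactly the sample-based PDF representation the abstract refers to; compression schemes such as PCA or ICA would then operate on these samples.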
NASA Astrophysics Data System (ADS)
Mao, Jiandong; Li, Jinxuan
2015-10-01
Particle size distribution is essential for describing the direct and indirect radiative effects of aerosols. Because the relationship between the aerosol size distribution and aerosol optical thickness (AOT) is an ill-posed Fredholm integral equation of the first kind, the traditional techniques for determining such size distributions, such as the Phillips-Twomey regularization method, are often ambiguous. Here, we use an approach based on an improved particle swarm optimization algorithm (IPSO) to retrieve the aerosol size distribution. Using AOT data measured by a CE318 sun photometer in Yinchuan, we compared the aerosol size distributions retrieved using a simple genetic algorithm, a basic particle swarm optimization algorithm and the IPSO. Aerosol size distributions for different weather conditions were analyzed, including sunny, dusty and hazy conditions. Our results show that the IPSO-based inversion method retrieved aerosol size distributions under all weather conditions, showing great potential for similar size distribution inversions.
Optimal hydrograph separation filter to evaluate transport routines of hydrological models
NASA Astrophysics Data System (ADS)
Rimmer, Alon; Hartmann, Andreas
2014-05-01
Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, such as surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures that perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process-based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop a benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than traditional measures such as r² or the Nash-Sutcliffe efficiency alone.
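A common member of the recursive digital filter family referred to here is the one-parameter Lyne-Hollick filter, in which the single parameter `a` is exactly the kind of subjective user choice the OHS approach replaces with a fit to stream chemistry. A minimal sketch (the paper's own filter formulation may differ):

```python
def lyne_hollick(q, a=0.925):
    """One-parameter recursive digital filter: split a streamflow series q into
    (quickflow, baseflow). Larger a pushes more flow into the slow component."""
    quick = [0.0] * len(q)
    for k in range(1, len(q)):
        f = a * quick[k - 1] + 0.5 * (1.0 + a) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])  # quickflow bounded by total flow
    base = [qk - fk for qk, fk in zip(q, quick)]
    return quick, base
```

By construction the two components always sum back to the observed hydrograph, so calibrating `a` against hydrochemical tracers changes only how the total is split.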
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that maximizes the final mass at orbit injection. The numerical solution of such a problem is not trivial and has been pursued with different methods for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique is a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space, taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations reduces the overall rocket dynamics to a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied to satisfy exactly the path constraint related to the maximum dynamic pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves capable of yielding the optimal rocket trajectory with very satisfactory numerical accuracy.
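The three-term velocity update described in the abstract is the standard PSO rule: an inertia term plus stochastically weighted pulls toward the particle's own best and the swarm's best positions. A minimal sketch on a generic objective (the parameter values are illustrative, not those used for the trajectory problem):

```python
import random

def pso_minimize(f, dim, bounds, n=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]           # each particle's best-seen position
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # three-term update: inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

The stochastic weights r1 and r2 are redrawn per component per step; the information sharing mentioned above enters solely through `gbest`.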
Particle swarm optimization for the clustering of wireless sensors
NASA Astrophysics Data System (ADS)
Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.
2003-07-01
Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
Particle Swarm and Ant Colony Approaches in Multiobjective Optimization
NASA Astrophysics Data System (ADS)
Rao, S. S.
2010-10-01
The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.
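The Pareto-optimality concept invoked in this record reduces to a simple dominance test: one design dominates another if it is no worse in every objective and strictly better in at least one, and the Pareto front is the set of non-dominated designs. A minimal sketch for minimization problems:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A swarm- or colony-based multiobjective optimizer maintains an archive filtered this way; the game-theoretic compromise step then selects one solution from the front.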
Augmented Lagrangian Particle Swarm Optimization in Mechanism Design
NASA Astrophysics Data System (ADS)
Sedlaczek, Kai; Eberhard, Peter
The problem of optimizing nonlinear multibody systems is in general nonlinear and nonconvex. This is especially true for the dimensional synthesis process of rigid body mechanisms, where often only local solutions might be found with gradient-based optimization methods. An attractive alternative for solving such multimodal optimization problems is the Particle Swarm Optimization (PSO) algorithm. This stochastic solution technique allows a derivative-free search for a global solution without the need for any initial design. In this work, we present an extension to the basic PSO algorithm in order to solve the problem of dimensional synthesis with nonlinear equality and inequality constraints. It utilizes the Augmented Lagrange Multiplier Method in combination with an advanced non-stationary penalty function approach that does not rely on excessively large penalty factors for sufficiently accurate solutions. Although the PSO method is even able to solve nonsmooth and discrete problems, this augmented algorithm can additionally calculate accurate Lagrange multiplier estimates for differentiable formulations, which are helpful in the analysis process of the optimization results. We demonstrate this method and show its very promising applicability to the constrained dimensional synthesis process of rigid body mechanisms.
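The augmented-Lagrange idea can be sketched for a single equality constraint: the constrained problem is replaced by a sequence of unconstrained subproblems, with the multiplier estimate refined between rounds so the penalty factor need not grow excessively. For self-containment the inner solver below is plain random search standing in for the paper's PSO, and all parameter values are illustrative:

```python
import random

def augmented_lagrangian(f, g, x0, rho=10.0, outer=8, inner=2000, seed=0):
    """Minimize f(x) subject to g(x) = 0 via an augmented Lagrangian
    L(x) = f(x) + lam*g(x) + rho*g(x)**2, updating lam between outer rounds."""
    rng = random.Random(seed)
    lam, x = 0.0, list(x0)
    for _ in range(outer):
        L = lambda z: f(z) + lam * g(z) + rho * g(z) ** 2
        best, best_val = x[:], L(x)
        step = 1.0
        for _ in range(inner):
            # crude derivative-free inner search (PSO in the paper)
            cand = [xi + rng.gauss(0.0, step) for xi in best]
            v = L(cand)
            if v < best_val:
                best, best_val = cand, v
            step *= 0.999
        x = best
        lam += 2.0 * rho * g(x)  # first-order multiplier update
    return x, lam
```

The returned `lam` is the Lagrange multiplier estimate the abstract highlights as a by-product useful for analyzing the optimization results.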
Reducing nonlinear waveform distortion in IM/DD systems by optimized receiver filtering
NASA Astrophysics Data System (ADS)
Zhou, Y. R.; Watkins, L. R.
1994-09-01
Nonlinear waveform distortion caused by the combined effect of fiber chromatic dispersion, self-phase modulation, and amplifier noise limits the attainable performance of high bit-rate, long haul optically repeatered systems. Signal processing in the receiver is investigated and found to be effective in reducing the penalty caused by this distortion. Third order low pass filters, with and without a tapped delay line equalizer are considered. The pole locations or the tap weights are optimized with respect to a minimum bit error rate criterion which accommodates distortion, pattern effects, decision time, threshold setting and noise contributions. The combination of a third order Butterworth filter and a five-tap, fractionally spaced equalizer offers more than 4 dB benefit at 4000 km compared with conventional signal processing designs.
Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me...
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2014-12-01
A novel algorithm is presented in this study for the estimation of spacecraft attitudes and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
Han, Shuxin; Yue, Qinyan; Yue, Min; Gao, Baoyu; Li, Qian; Yu, Hui; Zhao, Yaqin; Qi, Yuanfeng
2009-11-15
Novel filter media, sludge-fly ash ceramic particles (SFCP), were prepared using dewatered sludge, fly ash and clay with a mass ratio of 1:1:1. Compared with commercial ceramic particles (CCP), SFCP had higher total porosity, larger total surface area, and lower bulk and apparent density. Tests of heavy metal elements in lixivium proved that SFCP were safe for wastewater treatment. A lab-scale upflow anaerobic bioreactor was employed to ascertain the applicability of SFCP in the denitrification process using acetate as the carbon source. The results showed that the SFCP reactor outperformed the CCP reactor in terms of total nitrogen (TN) removal at the optimum C/N ratio of 4.03 when volumetric loading rates (VLR) ranged from 0.33 to 3.69 kg TN m⁻³ d⁻¹. Therefore, SFCP application, as a novel process of treating wastes with wastes, provides a promising route for sludge and fly ash utilization. PMID:19608336
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
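The synchronous/asynchronous distinction can be sketched with a thread pool: instead of a barrier at the end of each design iteration, each particle is updated and resubmitted as soon as its own evaluation returns. This is an illustrative reconstruction, not the authors' code; all names and parameters are assumptions:

```python
import random
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def async_pso(f, dim, bounds, n=12, evals=600, w=0.7, c1=1.5, c2=1.5,
              workers=4, seed=0):
    """Asynchronous PSO sketch: no per-iteration barrier; a particle moves and
    is resubmitted the moment its own evaluation completes."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest, pval = [x[:] for x in xs], [float("inf")] * n
    gbest, gval = xs[0][:], float("inf")
    done = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = {pool.submit(f, xs[i][:]): i for i in range(n)}
        while pending:
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                i = pending.pop(fut)
                val = fut.result()
                done += 1
                if val < pval[i]:
                    pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
                if done + len(pending) < evals:
                    # move particle i immediately, using the current gbest
                    for d in range(dim):
                        r1, r2 = rng.random(), rng.random()
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * r1 * (pbest[i][d] - xs[i][d])
                                    + c2 * r2 * (gbest[d] - xs[i][d]))
                        xs[i][d] += vs[i][d]
                    pending[pool.submit(f, xs[i][:])] = i
    return gbest, gval
```

With an expensive, variable-cost `f` this keeps every worker busy, which is the source of the improved parallel efficiency the abstract reports.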
Optimization of particle fluence in micromachining of CR-39
NASA Astrophysics Data System (ADS)
Rajta, I.; Baradács, E.; Bettiol, A. A.; Csige, I.; Tőkési, K.; Budai, L.; Kiss, Á. Z.
2005-04-01
Polyallyl diglycol carbonate (CR-39 etched track detector) material was irradiated with various doses of 2 MeV protons and alpha-particles in order to optimize the fluence for P-beam writing of CR-39. Irradiations were performed at the Institute of Nuclear Research, Debrecen, Hungary and at the National University of Singapore. Post-irradiation work has been carried out in Debrecen. The fluence in the irradiated area was sufficiently high that the latent tracks overlapped and the region could be removed collectively by short etching times of the order of less than 1 min. Theoretical calculations based on analytical and Monte Carlo simulations were done in order to calculate the probability of multiple latent track overlap. The optimal particle fluence was found by minimising the fluence and etching time at which collective removal of latent tracks could be observed. A short etching time is required to obtain high-resolution microstructures, while a low particle fluence is desirable for economic reasons, and also because high fluences increase the risk of unwanted damage (e.g. melting).
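The probability of multiple latent-track overlap can be approximated analytically if track centres are modelled as a spatial Poisson process: the number of tracks covering a point is then Poisson with mean fluence × track cross-section. That modelling assumption, and the function names, are introduced here for illustration and checked against a small Monte Carlo draw:

```python
import math
import random

def p_overlap_analytic(fluence, track_area, k=2):
    """P(a point is covered by at least k latent tracks) when the number of
    covering tracks is Poisson with mean lam = fluence * track_area."""
    lam = fluence * track_area
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

def p_overlap_mc(fluence, track_area, k=2, trials=20000, seed=0):
    """Monte Carlo check: draw Poisson counts (Knuth's method) and count
    how often at least k tracks cover the point."""
    rng = random.Random(seed)
    L = math.exp(-fluence * track_area)
    hits = 0
    for _ in range(trials):
        n, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            n += 1
        if n >= k:
            hits += 1
    return hits / trials
```

Raising the fluence drives the at-least-two-tracks probability toward 1, which is the collective-removal regime the experiment targets.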
Heeb, Norbert V; Rey, Maria Dolores; Zennegg, Markus; Haag, Regula; Wichser, Adrian; Schmid, Peter; Seiler, Cornelia; Honegger, Peter; Zeyer, Kerstin; Mohn, Joachim; Bürki, Samuel; Zimmerli, Yan; Czerwinski, Jan; Mayer, Andreas
2015-08-01
Iron-catalyzed diesel particle filters (DPFs) are widely used for particle abatement. Active catalyst particles, so-called fuel-borne catalysts (FBCs), are formed in situ, in the engine, when combusting precursors, which were premixed with the fuel. The obtained iron oxide particles catalyze soot oxidation in filters. Iron-catalyzed DPFs are considered as safe with respect to their potential to form polychlorinated dibenzodioxins/furans (PCDD/Fs). We reported that a bimetallic potassium/iron FBC supported an intense PCDD/F formation in a DPF. Here, we discuss the impact of fatty acid methyl ester (FAME) biofuel on PCDD/F emissions. The iron-catalyzed DPF indeed supported a PCDD/F formation with biofuel but remained inactive with petroleum-derived diesel fuel. PCDD/F emissions (I-TEQ) increased 23-fold when comparing biofuel and diesel data. Emissions of 2,3,7,8-TCDD, the most toxic congener [toxicity equivalence factor (TEF) = 1.0], increased 90-fold, and those of 2,3,7,8-TCDF (TEF = 0.1) increased 170-fold. Congener patterns also changed, indicating a preferential formation of tetra- and penta-chlorodibenzofurans. Thus, an inactive iron-catalyzed DPF becomes active, supporting a PCDD/F formation, when operated with biofuel containing impurities of potassium. Alkali metals are inherent constituents of biofuels. According to the current European Union (EU) legislation, levels of 5 μg/g are accepted. We conclude that risks for a secondary PCDD/F formation in iron-catalyzed DPFs increase when combusting potassium-containing biofuels. PMID:26176879
Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers
NASA Astrophysics Data System (ADS)
Rogers, Adam; Fiege, Jason D.
2011-02-01
Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our "matrix-free" approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.
NASA Astrophysics Data System (ADS)
Shao, Gui-Fang; Wang, Ting-Na; Liu, Tun-Dong; Chen, Jun-Ren; Zheng, Ji-Wen; Wen, Yu-Hua
2015-01-01
Pt-Pd alloy nanoparticles, as potential catalyst candidates for new-energy resources such as fuel cells and lithium ion batteries owing to their excellent reactivity and selectivity, have attracted growing attention in recent years. Since structure determines the physical and chemical properties of nanoparticles, the development of a reliable method for searching the stable structures of Pt-Pd alloy nanoparticles has become increasingly important for exploring the origin of their properties. In this article, we have employed the particle swarm optimization algorithm to investigate the stable structures of alloy nanoparticles with fixed shape and atomic proportion. An improved discrete particle swarm optimization algorithm has been proposed and the corresponding scheme has been presented. Subsequently, the swap operator and swap sequence have been applied to reduce the probability of premature convergence to local optima. Furthermore, the parameters of the exchange probability and the 'particle' size have also been considered in this article. Finally, tetrahexahedral Pt-Pd alloy nanoparticles have been used to test the effectiveness of the proposed method. The calculated results verify that the improved particle swarm optimization algorithm has superior convergence and stability compared with the traditional one.
A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela
2015-09-01
Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03119b
Panorama parking assistant system with improved particle swarm optimization method
NASA Astrophysics Data System (ADS)
Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong
2013-10-01
A panorama parking assistant system (PPAS) for the automotive aftermarket, together with a practical improved particle swarm optimization (IPSO) method, is proposed in this paper. In the PPAS, four fisheye cameras with different views are installed on the vehicle, and the four channels of video frames they capture are processed into a 360-deg top-view image around the vehicle. Beyond the embedded design of the PPAS, the key problem in image distortion correction and mosaicking is the efficiency of parameter optimization during camera calibration. To address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters and needs only one reference image to complete the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and calibration of the PPAS in automobile 4S shops.
What is Particle Swarm optimization? Application to hydrogeophysics (Invited)
NASA Astrophysics Data System (ADS)
Fernández Martïnez, J.; García Gonzalo, E.; Mukerji, T.
2009-12-01
Inverse problems are generally ill-posed. This yields lack of uniqueness and/or numerical instabilities. These features cause local optimization methods without prior information to provide unpredictable results, since they cannot discriminate among the multiple models consistent with the end criteria. Stochastic approaches to inverse problems shift attention to the probability of existence of certain interesting subsurface structures instead of looking for a unique model. Some well-known stochastic methods include genetic algorithms and simulated annealing. A more recent method, Particle Swarm Optimization (PSO), is a global optimization technique that has been successfully applied to solve inverse problems in many engineering fields, although its use in geosciences is still limited. Like all stochastic methods, PSO requires reasonably fast forward modeling. The basic idea behind PSO is that each model searches the model space according to its own misfit history and the misfit of the other models of the swarm. The PSO algorithm can be physically interpreted as a damped spring-mass system. This physical analogy was used to define a whole family of PSO optimizers and to establish criteria, based on the stability of particle swarm trajectories, for tuning the PSO parameters: inertia and the local and global accelerations. In this contribution we show applications to different low-cost hydrogeophysical inverse problems: 1) a salt water intrusion problem using Vertical Electrical Soundings, 2) the inversion of Spontaneous Potential data for groundwater modeling, and 3) the identification of Cole-Cole parameters for Induced Polarization data. We show that with this stochastic approach we are able to answer questions related to risk analysis, such as the depth of the salt intrusion with a certain probability, or probabilistic bounds for the water table depth. Moreover, these measures of uncertainty are obtained at small computational cost and time, allowing a very dynamic and practical analysis.
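A minimal sketch of the swarm dynamics described above, assuming the standard inertia-weight form of PSO; the function name, parameter values, and box-constrained search space are illustrative, not the paper's exact optimizer family:

```python
import numpy as np

def pso(misfit, bounds, n_particles=30, n_iter=100,
        inertia=0.72, c_local=1.5, c_global=1.5, seed=0):
    """Minimize `misfit` over a box with a basic inertia-weight particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # positions (candidate models)
    v = np.zeros_like(x)                                   # velocities
    p_best = x.copy()                                      # each particle's best model so far
    p_val = np.array([misfit(m) for m in x])
    g_best = p_best[p_val.argmin()].copy()                 # best model of the whole swarm
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # damped spring-mass update: inertia plus pulls toward local and global bests
        v = inertia * v + c_local * r1 * (p_best - x) + c_global * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([misfit(m) for m in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, float(p_val.min())
```

Because the whole swarm (not just the best particle) is retained, the final population can also be inspected to approximate the kind of probabilistic answers the abstract mentions.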
A Software Tool for Data Clustering Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Manda, Kalyani; Hanuman, A. Sai; Satapathy, Suresh Chandra; Chaganti, Vinaykumar; Babu, A. Vinaya
Many universities all over the world have offered courses on swarm intelligence since the 1990s. Particle Swarm Optimization (PSO) is a swarm intelligence technique. It is relatively young, with a pronounced need for a mature teaching method. This paper presents an educational software tool in MATLAB to aid the teaching of PSO fundamentals and its applications to data clustering. The software offers the advantage of running the classical K-Means clustering algorithm and also provides the facility to simulate the hybridization of K-Means with PSO to explore better clustering performance. The graphical user interfaces are user-friendly and offer good learning scope to aspiring learners of PSO.
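To illustrate how PSO can drive clustering, here is a sketch using the common encoding in which each particle holds all k centroids and fitness is the within-cluster sum of squared errors. This mirrors the general PSO-clustering idea, not the tool's exact MATLAB implementation; function names and constants are illustrative:

```python
import numpy as np

def sse(centroids, X):
    """Within-cluster sum of squared distances (the clustering fitness)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def pso_cluster(X, k, n_particles=20, n_iter=60, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    # each particle encodes k centroids, initialized on randomly chosen data points
    x = X[rng.integers(0, n, size=(n_particles, k))].reshape(n_particles, k * dim)
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([sse(p.reshape(k, dim), X) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.72 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        val = np.array([sse(p.reshape(k, dim), X) for p in x])
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()
    return g_best.reshape(k, dim)
```

A hybrid in the spirit of the tool would simply seed one particle with the K-Means solution before the loop.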
Particle swarm optimization applied to automatic lens design
NASA Astrophysics Data System (ADS)
Qin, Hua
2011-06-01
This paper describes a novel application of the Particle Swarm Optimization (PSO) technique to lens design. A mathematical model is constructed, and the merit function of an optical system is employed as the fitness function, combining the radii of curvature, the thicknesses between lens surfaces, and the refractive indices of the system. Using this function, aberration correction is carried out. A design example using PSO is given. Results show that PSO is a practical and powerful optical design tool: the method no longer depends on the initial lens structure and can create arbitrary search ranges for the structural parameters of a lens system, an important step towards automatic design with artificial intelligence.
Solving EMG-force relationship using Particle Swarm Optimization.
Botter, Alberto; Marateb, Hamid R; Afsharipour, Babak; Merletti, Roberto
2011-01-01
The Particle Swarm Optimization (PSO) algorithm is applied to the problem of "load sharing" among muscles acting on the same joint, for the purpose of estimating their individual mechanical contributions based on their EMG and on the total torque. Compared to the previously tested Interior-Reflective Newton Algorithm (IRNA), PSO is more computationally demanding. The mean square error between the experimental and reconstructed torque is similar for the two algorithms. However, IRNA requires multiple initializations and tighter constraints on the input variables, found by trial and error, to reach a suitable optimum, which is not the case for PSO, whose initialization is random. PMID:22255182
PMSM Driver Based on Hybrid Particle Swarm Optimization and CMAC
NASA Astrophysics Data System (ADS)
Tu, Ji; Cao, Shaozhong
A novel hybrid particle swarm optimization (PSO) and cerebellar model articulation controller (CMAC) is introduced to the permanent magnet synchronous motor (PMSM) driver. PSO can simulate the random learning among the individuals of population and CMAC can simulate the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments and comparisons have been done in MATLAB/SIMULINK. Analysis among PSO, hybrid PSO-CMAC and CMAC feed-forward control is also given. The results prove that the electric torque ripple and torque disturbance of the PMSM driver can be reduced by using the hybrid PSO-CMAC algorithm.
Gerencser, Akos A.; Doczi, Judit; Töröcsik, Beata; Bossy-Wetzel, Ella; Adam-Vizi, Vera
2008-01-01
Mitochondrial swelling is a hallmark of mitochondrial dysfunction, and is an indicator of the opening of the mitochondrial permeability transition pore. We introduce here a novel quantitative in situ single-cell assay of mitochondrial swelling based on standard wide-field or confocal fluorescence microscopy. This morphometric technique quantifies the relative diameter of mitochondria labeled by targeted fluorescent proteins. Fluorescence micrographs are spatially bandpass filtered, transmitting either high or low spatial frequencies. Mitochondrial swelling is measured by the fluorescence intensity ratio of the high- to the low-frequency filtered copy of the same image. We have termed this fraction the "thinness ratio". The filters are designed by numeric optimization for sensitivity. We characterized the thinness ratio technique by modeling microscopic image formation and by experimentation in cultured cortical neurons and astrocytes. The frequency-domain image processing endows the thinness ratio technique with robustness and subresolution sensitivity, overcoming the limitations of shape measurement approaches. The thinness ratio proved to be highly sensitive to mitochondrial swelling, but insensitive to fission or fusion of mitochondria. We found that in situ astrocytic mitochondria swell upon short-term uncoupling or inhibition of oxidative phosphorylation, whereas such responses are absent in cultured cortical neurons. PMID:18424491
NASA Astrophysics Data System (ADS)
Somasundaram, P.; Muthuselvan, N. B.
This paper presents new, computationally efficient, improved Particle Swarm algorithms for solving the Security Constrained Optimal Power Flow (SCOPF) problem in power systems with the inclusion of FACTS devices. The proposed algorithms are developed based on the combined application of Gaussian and Cauchy probability distribution functions incorporated into Particle Swarm Optimization (PSO). The power flow algorithm in the presence of a Static Var Compensator (SVC), a Thyristor Controlled Series Capacitor (TCSC) and a Unified Power Flow Controller (UPFC) has been formulated and solved. The proposed algorithms are tested on the standard IEEE 30-bus system. The analysis using PSO and modified PSO reveals that the proposed algorithms are relatively simple, efficient, reliable and suitable for real-time applications. These algorithms provide accurate solutions with fast convergence and have the potential to be applied to other power engineering problems.
Towards Optimal Filtering on ARM for ATLAS Tile Calorimeter Front-End Processing
NASA Astrophysics Data System (ADS)
Cox, Mitchell A.
2015-10-01
The Large Hadron Collider at CERN generates enormous amounts of raw data which presents a serious computing challenge. After planned upgrades in 2022, the data output from the ATLAS Tile Calorimeter will increase by 200 times to over 40 Tb/s. Advanced and characteristically expensive Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are currently used to process this quantity of data. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several ARM System on Chips in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. ARM is a cost-effective and energy-efficient alternative CPU architecture to the long-established x86 architecture. This PU could be used for a variety of high-level algorithms on the high data throughput raw data. An Optimal Filtering algorithm has been implemented in C++ and several ARM platforms have been tested. Optimal Filtering is used in the ATLAS Tile Calorimeter front-end for basic energy reconstruction and is currently implemented on DSPs.
Particle Swarm Optimization in Comparison with Classical Optimization for GPS Network Design
NASA Astrophysics Data System (ADS)
Doma, M. I.
2013-12-01
The Global Positioning System (GPS) is increasingly coming into use to establish geodetic networks. In order to meet the established aims of a geodetic network, it has to be optimized according to design criteria. Optimization of a GPS network can be carried out by selecting baseline vectors from all of the probable baseline vectors that can be measured in the network. Classically, a GPS network can be optimized using the trial-and-error method or analytical methods such as linear or nonlinear programming, or in some cases by generalized or iterative generalized inverses. Optimization problems may also be solved by intelligent optimization techniques such as Genetic Algorithms (GAs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The purpose of the present paper is to show how PSO can be used to design a GPS network. The efficiency and applicability of the method are then demonstrated on an example GPS network that had previously been solved using a classical method. Our example shows that the PSO is effective, improving efficiency by 19.2% over the classical method.
Optimal hydrograph separation filter to evaluate transport routines of hydrological models
NASA Astrophysics Data System (ADS)
Rimmer, Alon; Hartmann, Andreas
2014-06-01
Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures which perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using the newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the HG substances selected. The BFImax parameter (long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids (TSS), respectively. Predictions of SO4-2 transport from a process-based hydrological model were then benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower, with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models showed a clear advantage for the process-based model, with BGE = 0.15.
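The study's separation parameters are found by optimization against hydrochemistry; for reference, a widely used BFImax-parameterized recursive digital filter of this family is the Eckhardt (2005) two-parameter filter, sketched below. This is a representative member of the filter class, not necessarily the exact filter used in the paper, and the variable names are illustrative:

```python
def eckhardt_baseflow(q, bfi_max, a):
    """Two-parameter recursive digital baseflow filter (Eckhardt, 2005).

    q       : streamflow series (list of floats)
    bfi_max : long-term maximum ratio of baseflow to total streamflow
    a       : recession constant, 0 < a < 1
    """
    b = [q[0] * bfi_max]  # initialize the slow ("baseflow") component
    for qt in q[1:]:
        bt = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * qt) / (1 - a * bfi_max)
        b.append(min(bt, qt))  # baseflow cannot exceed total streamflow
    return b
```

The OHS idea in the abstract amounts to choosing (bfi_max, a) so that the resulting slow fraction best matches the chemistry-derived separation rather than leaving them to user judgment.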
Yun, Jong Pil; Jeon, Yong-Ju; Choi, Doo-chul; Kim, Sang Woo
2012-05-01
We propose a new defect detection algorithm for scale-covered steel wire rods. The algorithm incorporates an adaptive wavelet filter that is designed on the basis of lattice parameterization of orthogonal wavelet bases. This approach offers the opportunity to design orthogonal wavelet filters via optimization methods. To improve the performance and the flexibility of wavelet design, we propose the use of the undecimated discrete wavelet transform, and separate design of column and row wavelet filters but with a common cost function. The coefficients of the wavelet filters are optimized by the so-called univariate dynamic encoding algorithm for searches (uDEAS), which searches the minimum value of a cost function designed to maximize the energy difference between defects and background noise. Moreover, for improved detection accuracy, we propose an enhanced double-threshold method. Experimental results for steel wire rod surface images obtained from actual steel production lines show that the proposed algorithm is effective. PMID:22561939
Adaptive Resampling Particle Filters for GPS Carrier-Phase Navigation and Collision Avoidance System
NASA Astrophysics Data System (ADS)
Hwang, Soon Sik
This dissertation addresses three problems: 1) an adaptive resampling technique (ART) for Particle Filters, 2) precise relative positioning using Global Positioning System (GPS) Carrier-Phase (CP) measurements, applied to the nonlinear integer resolution problem in GPS CP navigation using Particle Filters, and 3) a collision detection system based on GPS CP broadcasts. First, Monte Carlo filters, called Particle Filters (PF), are widely used where the system is non-linear and non-Gaussian. In real-time applications, their estimation accuracies and efficiencies are significantly affected by the number of particles and by the scheduling of relocating weights and samples, the so-called resampling step. In this dissertation, the appropriate number of particles is estimated adaptively such that the errors of the sample mean and variance stay within bounds. These bounds are given by the confidence interval of a normal probability distribution for a multi-variate state. Two required sample numbers, one keeping the mean error and one keeping the variance error within the bounds, are derived. Resampling is triggered when the required sample number for the variance error crosses the required sample number for the mean error. Second, the PF using GPS CP measurements with adaptive resampling is applied to precise relative navigation between two GPS antennas. In order to make use of CP measurements for navigation, the unknown number of cycles between GPS antennas, the so-called integer ambiguity, must be resolved. The PF is applied to this integer ambiguity resolution problem, where the relative navigation state estimation involves nonlinear observations and nonlinear dynamics. Using the PF, the probability density function of the states is estimated by sampling from the position and velocity space, and the integer ambiguities are resolved without the usual hypothesis tests used to search for the integer ambiguity.
The ART manages the number of position samples and the frequency of the resampling step for real-time kinematic GPS navigation. The experimental results demonstrate the performance of the ART and the insensitivity of the proposed approach to GPS CP cycle-slips. Third, GPS has great potential for the development of new collision avoidance systems and is being considered for the next-generation Traffic alert and Collision Avoidance System (TCAS). Current TCAS equipment is capable of broadcasting GPS code information to nearby airplanes, and collision avoidance systems using navigation information based on GPS code have been studied. In this dissertation, an aircraft collision detection system using GPS CP information is addressed. The PF with position samples is employed for the CP-based relative position estimation problem, and the same algorithm can be used to determine the vehicle attitude if multiple GPS antennas are used. For a reliable and enhanced collision avoidance system, three-dimensional trajectories are projected using the estimates of the relative position, velocity, and attitude. It is shown that the performance of the GPS CP based collision detection algorithm meets the accuracy requirements for a precision approach for auto landing, with significantly fewer unnecessary collision false alarms and no missed alarms.
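The ART schedules resampling from confidence bounds on the sample mean and variance; as a point of comparison, the most common scheduling rule triggers systematic resampling when the effective sample size falls below a threshold. A sketch of that standard rule (explicitly not the dissertation's exact criterion):

```python
import numpy as np

def maybe_resample(particles, weights, rng, ess_threshold=0.5):
    """Systematic resampling, triggered when the effective sample size
    N_eff = 1 / sum(w_i^2) drops below a fraction of N.  This is the
    common ESS rule, shown for contrast with the ART's mean/variance
    error bounds."""
    n = len(weights)
    ess = 1.0 / np.sum(weights ** 2)
    if ess >= ess_threshold * n:
        return particles, weights            # weights still well spread: skip
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

With uniform weights the filter keeps its particles untouched; once a few particles carry nearly all the weight, the set is relocated onto them and the weights reset.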
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto
2001-08-20
The proposed research is directed at evaluating the effect of flame aerodynamics on NOx emissions from coal-fired burners in a systematic manner. This fundamental research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low-NOx burners to the kinetic emissions limit (below 0.2 lb./MMBTU). Experimental studies include both cold- and hot-flow evaluations of the following parameters: flame holder geometry, secondary air swirl, primary and secondary inlet air velocity, coal concentration in the primary air, and coal particle size distribution. Hot-flow experiments will also evaluate the effect of wall temperature on burner performance. Cold-flow studies will be conducted with surrogate particles as well as pulverized coal. The cold-flow furnace will be similar in size and geometry to the hot-flow furnace but will be designed for use with a laser Doppler velocimeter/phase Doppler particle size analyzer. The results of these studies will be used to predict particle trajectories in the hot-flow furnace as well as to estimate the effect of flame holder geometry on the furnace flow field. The hot-flow experiments will be conducted in a novel near-flame down-flow pulverized coal furnace equipped with externally heated walls. Both reactors will be sized to minimize wall effects on particle flow fields. The cold-flow results will be compared with Fluent computational fluid dynamics model predictions and correlated with the hot-flow results, with the overall goal of providing insight for novel low-NOx burner geometries.
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate, performing a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique, called Craziness based Particle Swarm Optimization (CRPSO), is proposed. CRPSO is simple in concept, easy to implement and computationally efficient, with two main advantages: fast, near-global convergence and nearly robust control parameters. The performance of PSO depends on its control parameters and may suffer from premature convergence and stagnation. To overcome these problems, the PSO algorithm has been modified into CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of the particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and the conventional PSO reported in the recent literature. CRPSO-based design results are also compared with PSPICE-based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
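The craziness mechanism described above can be sketched roughly as follows. This is an illustrative form only: the paper's exact direction-reversal and craziness factors may differ, and all names and constants here are assumptions:

```python
import numpy as np

def crpso_velocity(v, x, p_best, g_best, rng,
                   inertia=0.7, c1=1.5, c2=1.5,
                   p_craz=0.3, v_craz=0.1):
    """One CRPSO-style velocity update (illustrative, not the paper's
    exact formula).  With probability p_craz a particle receives a small
    random 'craziness' kick, and random direction-reversal signs (+/-1)
    on both terms help maintain swarm diversity."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = inertia * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    sign1 = np.where(rng.random(v.shape) < 0.5, 1.0, -1.0)  # reversal of previous velocity
    sign2 = np.where(rng.random(v.shape) < 0.5, 1.0, -1.0)  # reversal of craziness kick
    crazy = rng.random(v.shape[0])[:, None] < p_craz        # per-particle craziness trigger
    v_new = np.where(crazy, sign1 * v_new + sign2 * v_craz * rng.random(v.shape), v_new)
    return v_new
```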
Templeton, Michael R; Andrews, Robert C; Hofmann, Ron
2007-06-01
This bench-scale study investigated the passage of particle-associated bacteriophage through a dual-media (anthracite-sand) filter over a complete filter cycle and the effect on subsequent ultraviolet (UV) disinfection. Two model viruses, bacteriophages MS2 and T4, were considered. The water matrix was de-chlorinated tap water with either kaolin or Aldrich humic acid (AHA) added and coagulated with alum to form floc before filtration. The turbidity of the influent flocculated water was 6.4+/-1.5 NTU. Influent and filter effluent turbidity and particle counts were measured as well as headloss across the filter media. Filter effluent samples were collected for phage enumeration during three filter cycle stages: (i) filter ripening; (ii) stable operation; and (iii) end of filter cycle. Stable filter operation was defined according to a filter effluent turbidity goal of <0.3 NTU. Influent and filter effluent samples were subsequently exposed to UV light (254 nm) at 40 mJ/cm(2) using a low pressure UV collimated beam. The study found statistically significant differences (alpha=0.05) in the quantity of particle-associated phage present in the filter effluent during the three stages of filtration. There was reduced UV disinfection efficiency due to the presence of particle-associated phage in the filter effluent in trials with bacteriophage MS2 and humic acid floc. Unfiltered influent water samples also resulted in reduced UV inactivation of phage relative to particle-free control conditions for both phages. Trends in filter effluent turbidity corresponded with breakthrough of particle-associated phage in the filter effluent. The results therefore suggest that maintenance of optimum filtration conditions upstream of UV disinfection is a critical barrier to particle-associated viruses. PMID:17433406
Repulsive Self-adaptive Acceleration Particle Swarm Optimization
Ludwig, Simone
… of every particle at every iteration. The velocity weights include the acceleration constants as well … Adaptive Particle Swarm Optimization (PSO) variants have become popular in recent years. The main idea …
A spin filter polarimeter and an {alpha}-particle D-state study
Lemieux, S.K.
1993-12-31
A Spin Filter Polarimeter (SFP) which reveals populations of individual hyperfine states of nuclear spin-polarized H± (or D±) beams has been tested. The SFP is based on unique properties of a three-level interaction in the 2S1/2 and 2P1/2 states of the hydrogen (or deuterium) atoms, created when the polarized ion beams pick up electrons in cesium vapor. The SFP has the potential for an absolute accuracy of better than 1.5%, so it could be used to calibrate polarimeters absolutely for low-energy experiments for which no nuclear polarization standard exists. Test results show that the SFP provides a quick and elegant measure of the relative hyperfine state populations in the beam. This α-particle study is a small part of a larger project studying the deuteron-deuteron configuration of the α-particle wave function. The differential cross section and tensor analyzing powers (TAP) were measured for the 50Ti(d,α)48Sc reaction (with a polarized deuteron beam) to the Jπ = 7+ state in 48Sc at Ex = 1.097 MeV and compared with exact finite-range distorted-wave Born approximation (DWBA) calculations. The DWBA calculations use realistic α-particle wave functions generated from variational Monte Carlo calculations.
Deriche, Rachid; Calder, Jeff; Descoteaux, Maxime
2009-08-01
Diffusion MRI has become an established research tool for the investigation of tissue structure and orientation. Since its inception, Diffusion MRI has expanded considerably to include a number of variations such as diffusion tensor imaging (DTI), diffusion spectrum imaging (DSI) and Q-ball imaging (QBI). The acquisition and analysis of such data is very challenging due to its complexity. Recently, an exciting new Kalman filtering framework has been proposed for DTI and QBI reconstructions in real-time during the repetition time (TR) of the acquisition sequence. In this article, we first revisit and thoroughly analyze this approach and show it is actually sub-optimal and not recursively minimizing the intended criterion due to the Laplace-Beltrami regularization term. Then, we propose a new approach that implements the QBI reconstruction algorithm in real-time using a fast and robust Laplace-Beltrami regularization without sacrificing the optimality of the Kalman filter. We demonstrate that our method solves the correct minimization problem at each iteration and recursively provides the optimal QBI solution. We validate with real QBI data that our proposed real-time method is equivalent in terms of QBI estimation accuracy to the standard offline processing techniques and outperforms the existing solution. Last, we propose a fast algorithm to recursively compute gradient orientation sets whose partial subsets are almost uniform and show that it can also be applied to the problem of efficiently ordering an existing point-set of any size. This work enables a clinician to start an acquisition with just the minimum number of gradient directions and an initial estimate of the orientation distribution functions (ODF) and then the next gradient directions and ODF estimates can be recursively and optimally determined, allowing the acquisition to be stopped as soon as desired or at any iteration with the optimal ODF estimates. 
This opens new and interesting opportunities for real-time feedback for clinicians during an acquisition and also for researchers investigating into optimal diffusion orientation sets and real-time fiber tracking and connectivity mapping. PMID:19586794
Rod-filter-field optimization of the J-PARC RF-driven H- ion source
NASA Astrophysics Data System (ADS)
Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.
2015-04-01
In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H- ion beam of 60 mA within normalized emittances of 1.5π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H- ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter-field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for the RF-driven H- ion source with the internal antenna, the procedure to optimize it is not established. In order to optimize the RFF and establish the procedure, the beam performances of the J-PARC source with various types of rod-filter-magnets (RFMs) were measured. By changing the RFM's gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H- ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM's cross-section (magnetomotive force) was indispensable for easy operation with the temperature (TPE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. A 5% reduction of the RFM's cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation of the PE from several tens of minutes to several minutes for TPE around 60°C.
A particle filter to reconstruct a free-surface flow from a depth camera
Combès, Benoit; Guibert, Anthony; Mémin, Etienne
2016-01-01
We investigate the combined use of a Kinect depth sensor and a stochastic data assimilation method to recover free-surface flows. More specifically, we use a weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only. This particle filter accounts for model and observation errors. The data assimilation scheme is enhanced by using two observations instead of the classical single observation. We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example, and a flow in a suddenly expanding flume as a more realistic flow. The robustness of the method to depth data errors, and also to initial and inflow conditions, is considered. We illustrate the interest of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions. Then, the performance of the Kinect sensor for capturing temporal sequences of depth observations is investigated. Finally,...
Canedo-Rodriguez, Adrian; Rodriguez, Jose Manuel; Alvarez-Santos, Victor; Iglesias, Roberto; Regueiro, Carlos V.
2015-01-01
In wireless positioning systems, the transmitter's power is usually fixed. In this paper, we explore the use of varying transmission powers to increase the performance of a wireless localization system. To this extent, we have designed a robot positioning system based on wireless motes. Our motes use an inexpensive, low-power sub-1-GHz system-on-chip (CC1110) working in the 433-MHz ISM band. Our localization algorithm is based on a particle filter and infers the robot position by: (1) comparing the power received with the expected one; and (2) integrating the robot displacement. We demonstrate that the use of transmitters that vary their transmission power over time improves the performance of the wireless positioning system significantly, with respect to a system that uses fixed power transmitters. This opens the door for applications where the robot can localize itself actively by requesting the transmitters to change their power in real time. PMID:25942641
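The weight-update step, comparing received power with the power expected at each particle, can be sketched as follows, assuming a log-distance path-loss model with Gaussian measurement noise; the paper's actual radio model and parameter values may differ, and the names here are illustrative:

```python
import numpy as np

def update_weights(particles, weights, tx_pos, rssi_meas,
                   p0=-40.0, n_exp=2.5, sigma=4.0):
    """Reweight robot-position particles against one RSSI measurement.

    particles : (N, 2) candidate robot positions
    tx_pos    : transmitter position
    rssi_meas : measured received power (dBm)
    p0, n_exp : assumed path-loss model (power at 1 m, exponent)
    """
    d = np.linalg.norm(particles - tx_pos, axis=1).clip(min=0.1)
    rssi_pred = p0 - 10.0 * n_exp * np.log10(d)     # expected power at each particle
    like = np.exp(-0.5 * ((rssi_meas - rssi_pred) / sigma) ** 2)
    w = weights * like
    return w / w.sum()
```

The paper's varying-power idea would correspond to re-running this update with a different known p0 for each transmission level, sharpening the posterior over position.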
Indoor anti-occlusion visible light positioning systems based on particle filtering
NASA Astrophysics Data System (ADS)
Jiang, Meng; Huang, Zhitong; Li, Jianfeng; Zhang, Ruqi; Ji, Yuefeng
2015-04-01
As one of the most popular categories of mobile services, indoor location-based services have seen rapid growth over the past decades. Indoor positioning methods based on Wi-Fi, radio-frequency identification or Bluetooth are widely commercialized; however, they have disadvantages such as low accuracy or high cost. An emerging method using visible light has come under research recently. Existing visible light positioning (VLP) schemes using carrier allocation, time allocation and multiple receivers all have limitations. This paper presents a novel mechanism using particle filtering in a VLP system. With this method, no additional devices are needed and the occlusion problem in visible light positioning is alleviated, which effectively enhances the flexibility of indoor positioning.
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel
NASA Astrophysics Data System (ADS)
Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.
2015-01-01
This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized using the APSO algorithm are pulse on-time, pulse off-time, gap voltage and wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established using linear regression models in Minitab software. Finally, APSO, a nature-inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as a constraint. Confirmation experiments carried out under the optimum conditions show that the proposed algorithm is effective in finding numerous optimal input machining parameters that can fulfill the wide-ranging requirements of a process engineer working in the WEDM industry.
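The accelerated variant named above (commonly attributed to Yang) drops per-particle velocities and personal bests: each particle drifts toward the global best with a decaying random perturbation. A minimal sketch on a toy sphere objective follows; in the study, the fitted regression models for MRR, roughness and kerf width would replace `sphere`, and all control parameters here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def apso_minimize(f, dim, n_particles=30, iters=200, alpha0=0.5, beta=0.5, gamma=0.97):
    """Accelerated PSO sketch: x <- (1-beta)*x + beta*g + alpha*noise,
    with the perturbation scale alpha shrinking geometrically."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    g = min(x, key=f).copy()           # global best so far
    alpha = alpha0
    for _ in range(iters):
        x = (1 - beta) * x + beta * g + alpha * rng.normal(0, 1, x.shape)
        cand = min(x, key=f)
        if f(cand) < f(g):
            g = cand.copy()
        alpha *= gamma                  # reduce exploration over time
    return g

sphere = lambda v: float(np.sum(v ** 2))
best = apso_minimize(sphere, dim=4)
```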
NASA Astrophysics Data System (ADS)
Gaur, Shishir; Chahar, B. R.; Graillot, Didier
2011-05-01
This paper presents the application of an Analytic Element Method (AEM) and particle swarm optimization (PSO) based simulation-optimization model for the solution of groundwater management problems. The AEM-PSO model developed was applied to the Dore river basin, France, to solve two groundwater hydraulic management problems: (1) maximum pumping from an aquifer, and (2) minimum cost to develop a new pumping well system. The discharge as well as the locations of the pumping wells were taken as the decision variables. The influence of piping length on the total development cost for new wells was examined. The optimal number of wells was also calculated by applying the model to different sets of wells. The constraints of the problem were identified with the help of the water authority, stakeholders and officials. The AEM flow model was developed to facilitate the management model, as in each iteration the optimization model calls the simulation model to calculate the values of the groundwater heads. The AEM-PSO model was found to be efficient in identifying the optimal location and discharge of the pumping wells. The penalty function approach was found to be valuable in PSO and also acceptable for groundwater hydraulic management problems.
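The penalty-function approach the study found valuable can be sketched generically: constraint violations are added to the objective so that a standard PSO can search an unconstrained landscape. The toy linear objective and constraint below stand in for the pumping discharges and head constraints evaluated by the AEM simulation; all coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def penalized(x, objective, constraints, mu=1e3):
    """Penalty-function approach: squared infeasibility (g(x) <= 0 is feasible)
    is added to the objective, steering the swarm back toward feasibility."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + mu * violation

def pso_minimize(f, dim, n=40, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO with inertia weight w and acceleration c1, c2."""
    x = rng.uniform(0, 10, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Toy analogue: maximize total discharge x1 + x2 (minimize its negative)
# subject to a head-style constraint x1 + 2*x2 <= 10 and non-negativity.
obj = lambda x: -(x[0] + x[1])
cons = [lambda x: x[0] + 2 * x[1] - 10, lambda x: -x[0], lambda x: -x[1]]
best = pso_minimize(lambda x: penalized(x, obj, cons), dim=2)
```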
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
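A heavily simplified sketch of the tuner-selection idea: with fewer sensors than health parameters, each candidate subset of parameters is scored by the steady-state Kalman error it would yield, with untuned parameters charged their prior variance, and the subset with the lowest theoretical mean-squared error wins. The sensor influence matrix, noise levels, and cost model below are illustrative assumptions, not the report's engine model (which also uses a multi-variable iterative search rather than brute force):

```python
import numpy as np
from itertools import combinations

def steady_state_cov(C, Q, R, iters=500):
    """Iterate the discrete Riccati recursion for a random-walk state (A = I)
    until the error covariance settles."""
    P = np.eye(Q.shape[0])
    for _ in range(iters):
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        P = (np.eye(P.shape[0]) - K @ C) @ P + Q
    return P

def select_tuners(C, q, r, prior_var, m):
    """Brute-force tuner selection: estimate only an m-subset of health
    parameters and charge each untuned parameter its prior variance."""
    p = C.shape[1]
    best, best_cost = None, np.inf
    for subset in combinations(range(p), m):
        Cs = C[:, subset]
        P = steady_state_cov(Cs, q * np.eye(m), r * np.eye(C.shape[0]))
        cost = np.trace(P) + prior_var * (p - m)   # theoretical total MSE
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost

# Hypothetical 2-sensor, 4-health-parameter model: the sensors respond strongly
# to parameters 0 and 2 and only weakly to parameters 1 and 3.
C_sens = np.array([[1.0, 0.1, 0.8, 0.05],
                   [0.2, 0.05, 1.0, 0.1]])
tuners, cost = select_tuners(C_sens, q=1e-4, r=1e-2, prior_var=1.0, m=2)
```

Under these assumed numbers the well-observed pair (0, 2) is selected, mirroring the intuition that tuners should be the parameters the sensor suite can actually resolve.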
Cho, Kyungmin Jacob; Turkevich, Leonid; Miller, Matthew; McKay, Roy; Grinshpun, Sergey A; Ha, KwonChul; Reponen, Tiina
2013-01-01
This study investigated differences in penetration between fibers and spherical particles through faceseal leakage of an N95 filtering facepiece respirator. Three cyclic breathing flows were generated corresponding to mean inspiratory flow rates (MIF) of 15, 30, and 85 L/min. Fibers had a mean diameter of 1 µm and a median length of 4.9 µm (calculated aerodynamic diameter, dae = 1.73 µm). Monodisperse polystyrene spheres with a mean physical diameter of 1.01 µm (PSI) and 1.54 µm (PSII) were used for comparison (calculated dae = 1.05 and 1.58 µm, respectively). Two optical particle counters simultaneously determined concentrations inside and outside the respirator. Geometric means (GMs) for filter penetration of the fibers were 0.06, 0.09, and 0.08% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.07, 0.12, and 0.12%. GMs for faceseal penetration of fibers were 0.40, 0.14, and 0.09% at MIF of 15, 30, and 85 L/min, respectively. Corresponding values for PSI were 0.96, 0.41, and 0.17%. Faceseal penetration decreased with increased breathing rate for both types of particles (p ≤ 0.001). GMs of filter and faceseal penetration of PSII at an MIF of 30 L/min were 0.14% and 0.36%, respectively. Filter penetration and faceseal penetration of fibers were significantly lower than those of PSI (p < 0.001) and PSII (p < 0.003). This confirmed that higher penetration of PSI was not due to slightly smaller aerodynamic diameter, indicating that the shape of fibers rather than their calculated mean aerodynamic diameter is a prevailing factor on deposition mechanisms through the tested respirator. In conclusion, faceseal penetration of fibers and spherical particles decreased with increasing breathing rate, which can be explained by increased capture by impaction.
Spherical particles had 2.0-2.8 times higher penetration through faceseal leaks and 1.1-1.5 higher penetration through filter media than fibers, which can be attributed to differences in interception losses. PMID:23339437
Zhou, Yi; Zhang, Shaojun; Liu, Ying; Yang, Hongsheng
2014-01-01
Industrial aquaculture wastewater contains large quantities of suspended particles that can be easily broken down physically. Introduction of macro-bio-filters, such as bivalve filter feeders, may offer the potential for treatment of fine suspended matter in industrial aquaculture wastewater. In this study, we employed two kinds of bivalve filter feeders, the Pacific oyster Crassostrea gigas and the blue mussel Mytilus galloprovincialis, to deposit suspended solids from marine fish aquaculture wastewater in flow-through systems. Results showed that the biodeposition rate of suspended particles by C. gigas (shell height: 8.67±0.99 cm) and M. galloprovincialis (shell height: 4.43±0.98 cm) was 77.84±7.77 and 6.37±0.67 mg ind⁻¹·d⁻¹, respectively. The total solid suspension (TSS) deposition rates of the oyster and mussel treatments were 3.73±0.27 and 2.76±0.20 times higher than that of the control treatment without bivalves, respectively. The TSS deposition rates of bivalve treatments were significantly higher than the natural sedimentation rate of the control treatment (P<0.001). Furthermore, organic matter and C, N in the sediments of bivalve treatments were significantly lower than those in the sediments of the control (P<0.05). It was suggested that the filter feeders C. gigas and M. galloprovincialis had considerable potential to filter and accelerate the deposition of suspended particles from industrial aquaculture wastewater, and simultaneously yield value-added biological products. PMID:25250730
Optimization of nanoparticle core size for magnetic particle imaging
NASA Astrophysics Data System (ADS)
Ferguson, R. Matthew; Minard, Kevin R.; Krishnan, Kannan M.
2009-05-01
Magnetic particle imaging (MPI) is a powerful new research and diagnostic imaging platform that is designed to image the amount and location of superparamagnetic nanoparticles in biological tissue. Here, we present mathematical modeling results that show how MPI sensitivity and spatial resolution both depend on the size of the nanoparticle core and its other physical properties, and how imaging performance can be effectively optimized through rational core design. Modeling is performed using the properties of magnetite cores, since these are readily produced with a controllable size that facilitates quantitative imaging. Results show that very low detection thresholds (of a few nanograms Fe3O4) and sub-millimeter spatial resolution are possible with MPI.
Field, Matthew A.; Cho, Vicky
2015-01-01
A diversity of tools is available for identification of variants from genome sequence data. Given the current complexity of incorporating external software into a genome analysis infrastructure, a tendency exists to rely on the results from a single tool alone. The quality of the output variant calls is highly variable however, depending on factors such as sequence library quality as well as the choice of short-read aligner, variant caller, and variant caller filtering strategy. Here we present a two-part study, first using the high quality ‘genome in a bottle’ reference set to demonstrate the significant impact the choice of aligner, variant caller, and variant caller filtering strategy has on overall variant call quality, and further how certain variant callers outperform others with increased sample contamination, an important consideration when analyzing sequenced cancer samples. This analysis confirms previous work showing that combining variant calls of multiple tools results in the best quality resultant variant set, for either specificity or sensitivity, depending on whether the intersection or union of all variant calls is used, respectively. Second, we analyze a melanoma cell line derived from a control lymphocyte sample to determine whether software choices affect the detection of clinically important melanoma risk-factor variants, finding that only one of the three such variants is unanimously detected under all conditions. Finally, we describe a cogent strategy for implementing a clinical variant detection pipeline; a strategy that requires careful software selection, optimized variant caller filtering, and combined variant calls in order to effectively minimize false negative variants. While implementing such features represents an increase in complexity and computation, the results offer indisputable improvements in data quality. PMID:26600436
Diesel passenger car PM emissions: From Euro 1 to Euro 4 with particle filter
NASA Astrophysics Data System (ADS)
Tzamkiozis, Theodoros; Ntziachristos, Leonidas; Samaras, Zissis
2010-03-01
This paper examines the impact of emission control and fuel technology development on the emissions of gaseous and, in particular, PM pollutants from diesel passenger cars. Three cars in five configurations in total were measured, covering the range from Euro 1 to Euro 4 standards. The emission control ranged from no aftertreatment in the Euro 1 case, an oxidation catalyst in Euro 2, and two oxidation catalysts with exhaust gas recirculation in Euro 3 and Euro 4, while a catalyzed diesel particle filter (DPF) fitted to the Euro 4 car led to a Euro 4 + DPF configuration. Both certification test and real-world driving cycles were employed. The results showed that CO and HC emissions were much lower than the emission standard over the hot-start real-world cycles. However, vehicle technologies from Euro 2 to Euro 4 exceeded the NOx and PM emission levels over at least one real-world cycle. The NOx emission level reached up to 3.6 times the certification level in the case of the Euro 4 car. PM emissions were up to 40% and 60% higher than the certification level for the Euro 2 and Euro 3 cars, while the Euro 4 car emitted close to or slightly below the certification level over the real-world driving cycles. PM mass reductions from Euro 1 to Euro 4 were associated with a relevant decrease in the total particle number, in particular over the certification test. This was not followed by a respective reduction in the solid particle number, which remained rather constant across the four technologies at 0.86 × 10¹⁴ km⁻¹ (coefficient of variation 9%). As a result, the ratio of solid to total particle number ranged from ~50% in Euro 1 to 100% in Euro 4. A significant reduction of more than three orders of magnitude in solid particle number is achieved with the introduction of the DPF. However, the potential for nucleation mode formation at high speed from the DPF car is an issue that needs to be considered in the overall assessment of its environmental benefit.
Finally, comparison of the mobility and aerodynamic diameters of airborne particles led to fractal dimensions dropping from 2.60 (Euro 1) to 2.51 (Euro 4), denoting a looser structure with improving technology.
Video object tracking using improved chamfer matching and condensation particle filter
NASA Astrophysics Data System (ADS)
Wu, Tao; Ding, Xiaoqing; Wang, Shengjin; Wang, Kongqiao
2008-02-01
Object tracking is an essential problem in the field of video and image processing. Although tracking algorithms working on gray-scale video are convenient in actual applications, they are more difficult to develop than those using color features, since less information is taken into account. Little research has been dedicated to tracking objects using edge information. In this paper, we propose a novel video tracking algorithm for gray-scale videos based on edge information. The method combines a condensation particle filter with an improved chamfer matching. The improved chamfer matching is rotation invariant and capable of estimating the shift between an observed image patch and a template by an orientation distance transform. A modified discriminative likelihood measurement that focuses on the differences is adopted. These values are normalized and used as the weights of the particles that predict and track the object. Experimental results show that our modifications to chamfer matching improve its performance in the video tracking problem, and that the algorithm is stable, robust, and can effectively handle rotation distortion. Further work can be done on updating the template to adapt to significant viewpoint and scale changes in the appearance of the object during tracking.
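The core of chamfer matching (without the paper's rotation-invariant orientation distance transform, which is not reproduced here) can be sketched with a plain Euclidean distance transform: the score of a candidate placement is the mean distance from each template edge pixel to the nearest observed edge pixel. A condensation tracker would treat each candidate offset as a particle and weight it by, e.g., exp(-score). The edge maps below are synthetic toys:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(obs_edges, tmpl_edges, offset):
    """Mean distance from each template edge pixel (shifted by `offset`)
    to the nearest observed edge pixel; lower = better match."""
    # Distance transform of the complement: value at (r, c) is the distance
    # to the nearest edge pixel in the observation.
    dt = distance_transform_edt(~obs_edges)
    rs, cs = np.nonzero(tmpl_edges)
    rs, cs = rs + offset[0], cs + offset[1]
    inside = (rs >= 0) & (rs < dt.shape[0]) & (cs >= 0) & (cs < dt.shape[1])
    return dt[rs[inside], cs[inside]].mean()

# Toy example: a square contour observed at (10, 12), template drawn at the origin.
obs = np.zeros((64, 64), dtype=bool)
obs[10:30, 12] = obs[10:30, 31] = True
obs[10, 12:32] = obs[29, 12:32] = True
tmpl = np.zeros((64, 64), dtype=bool)
tmpl[0:20, 0] = tmpl[0:20, 19] = True
tmpl[0, 0:20] = tmpl[19, 0:20] = True

good = chamfer_score(obs, tmpl, (10, 12))   # correctly aligned placement
bad = chamfer_score(obs, tmpl, (0, 0))      # misaligned placement
```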
An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter
NASA Astrophysics Data System (ADS)
Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning
2015-08-01
An optimized strain-demodulation method based on a piezoelectric transducer (PZT)-driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/µε.
NASA Astrophysics Data System (ADS)
Takeda, Yasuhiko; Iizuka, Hideo; Ito, Tadashi; Mizuno, Shintaro; Hasegawa, Kazuo; Ichikawa, Tadashi; Ito, Hiroshi; Kajino, Tsutomu; Higuchi, Kazuo; Ichiki, Akihisa; Motohiro, Tomoyoshi
2015-08-01
We have theoretically investigated photovoltaic cells used under the illumination condition of monochromatic light incident from a particular direction, which is very different from that for solar cells under natural sunlight, using detailed balance modeling. A multilayer bandpass filter formed on the surface of the cell has been found to trap the light generated by radiative recombination inside the cell, reduce emission from the cell, and consequently improve conversion efficiency. The light trapping mechanism is interpreted in terms of a one-dimensional photonic crystal, and the design guide to optimize the multilayer structure has been clarified. For obliquely incident illumination, as well as normal incidence, a significant light trapping effect has been achieved, although the emission patterns are extremely different from each other depending on the incident directions.
The absorptivity and imaginary index of refraction for carbon and methylene blue particles were inferred from the photoacoustic spectra of samples collected on Teflon filter substrates. Three models of varying complexity were developed to describe the photoacoustic signal as a fu...
Neklyudov, I M; Fedorova, L I; Poltinin, P Ya
2013-01-01
The main purpose of this research is to determine the influence of small dispersive coal dust particles of different fractional consistence on the technical characteristics of the vertical iodine air filter at a nuclear power plant. The transport properties of the small dispersive coal dust particles in the granular filtering medium of the absorber in the vertical iodine air filter were studied under modeled aerodynamic conditions similar to the real ones. It is shown that the presence of small dispersive coal dust particles of different fractional consistence, with dimensions decreasing down to the micro and nano scales, under the action of the air-dust aerosol stream normally results in a significant change in the distribution of the particle masses in the granular filtering medium of the absorber, changing the aerodynamic characteristics of the vertical iodine air filter. The precise characterization of...
Optimizing magnetite nanoparticles for mass sensitivity in magnetic particle imaging
Ferguson, R. Matthew; Minard, Kevin R.; Khandhar, Amit P.; Krishnan, Kannan M.
2011-01-01
Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, the authors explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency f0. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. The driving field amplitude H0 = 6 mT µ0⁻¹ and frequency f0 = 250 kHz were chosen to be suitable for imaging small animals. Experimental results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: The experimental results show a clear variation in the MPI signal intensity as a function of MNP diameter that is in agreement with simulated results. A maximum in the plot of MPI signal vs MNP size indicates there is a particular size that is optimal for the chosen f0. Conclusions: The authors observed that MNPs 15 nm in diameter generate maximum signal amplitude in MPI experiments at 250 kHz. The authors expect the physical basis for this result, the change in magnetic relaxation with MNP size, will impact MPI under other experimental conditions. PMID:21520874
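The size dependence of the harmonic signal can be illustrated with the equilibrium Langevin model alone. Note the hedge: this sketch ignores the size-dependent magnetic relaxation that the authors' full model includes and that produces the experimentally observed optimum near 15 nm, so the equilibrium third harmonic below simply grows with core size. The saturation magnetization is a typical literature value for magnetite, assumed here:

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant (J/K)
MS = 446e3          # magnetite saturation magnetization (A/m), assumed typical value

def langevin(xi):
    """Langevin function L(x) = coth(x) - 1/x, with the small-argument limit x/3."""
    out = np.empty_like(xi)
    small = np.abs(xi) < 1e-6
    out[small] = xi[small] / 3.0
    xs = xi[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

def third_harmonic(d_core_nm, mu0_H0=6e-3, f0=250e3, T=300.0, n=4096):
    """Relative 3rd-harmonic amplitude of equilibrium Langevin magnetization
    for a monodisperse core diameter under a sinusoidal drive field."""
    V = np.pi / 6.0 * (d_core_nm * 1e-9) ** 3        # magnetic core volume (m^3)
    m = MS * V                                        # particle moment (A*m^2)
    t = np.arange(n) / (n * f0)                       # exactly one drive period
    xi = m * mu0_H0 * np.sin(2 * np.pi * f0 * t) / (KB * T)
    spectrum = np.abs(np.fft.rfft(langevin(xi))) / n
    return spectrum[3]                                # bin 3 = 3rd harmonic

h5, h15, h25 = (third_harmonic(d) for d in (5.0, 15.0, 25.0))
```

Adding a Debye-type relaxation term with a size-dependent time constant is what turns this monotone curve into one with an interior maximum.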
Yu, Xiaobing; Cao, Jie; Shan, Haiyan; Zhu, Li; Guo, Jun
2014-01-01
Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, which have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to jump out of them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of PSO and DE and maintains diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail. PMID:24688370
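One way to sketch the hybrid idea: run standard PSO updates, and trigger a DE/rand/1 mutation with binomial crossover whenever the population's spread collapses, as a stand-in for the paper's adaptive mutation and balanced parameter. All control parameters and the diversity trigger below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def hpso_de(f, dim, n=30, iters=300, w=0.7, c1=1.5, c2=1.5,
            F=0.5, CR=0.9, div_thresh=1e-2):
    """Hybrid sketch: PSO updates, plus DE/rand/1 mutation applied when the
    population clusters, to help it escape local optima."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # Diversity check: if particles cluster, perturb with DE/rand/1 + crossover.
        if x.std(axis=0).mean() < div_thresh:
            a, b, c = (rng.permutation(n) for _ in range(3))
            mutant = x[a] + F * (x[b] - x[c])
            cross = rng.random((n, dim)) < CR
            x = np.where(cross, mutant, x)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
best, best_f = hpso_de(sphere, dim=2)
```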
A Bayesian Interpretation of the Particle Swarm Optimization and Its Kernel Extension
Andras, Peter
2012-01-01
Particle swarm optimization is a popular method for solving difficult optimization problems. There have been attempts to formulate the method in formal probabilistic or stochastic terms (e.g. bare bones particle swarm) with the aim of achieving more generality and explaining the practical behavior of the method. Here we present a Bayesian interpretation of particle swarm optimization. This interpretation provides a formal framework for the incorporation of prior knowledge about the problem that is being solved. Furthermore, it also allows the method to be extended through the use of kernel functions that represent an intermediary transformation of the data into a different space where the optimization problem is expected to be easier to resolve; such a transformation can be seen as a form of prior knowledge about the nature of the optimization problem. We derive the commonly used particle swarm methods from the general Bayesian formulation as particular cases. PMID:23144937
Moon, Un-Ku
Continuous-Time Filter Design Optimized for Reduced Die Area
Charles Myers, Student Member, IEEE; Brandon ...
IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 51, No. 3, March 2004, p. 105
... for distributing capacitor and resistor area to optimally reduce die area in a given continuous-time filter design
Park, Jae Hong; Yoon, Ki Young; Na, Hyungjoo; Kim, Yang Seon; Hwang, Jungho; Kim, Jongbaeg; Yoon, Young Hun
2011-09-01
We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 µm) were used as the test aerosol particles, and their number concentration was measured using a scanning mobility particle sizer. Antibacterial tests were performed using the colony counting method, and Escherichia coli (E. coli) was used as the test bacteria. The results showed that the CNT deposition increased the filtration efficiency of nano and submicron-sized particles, but did not increase the pressure drop across the filter. When a pristine glass fiber filter that had no CNTs was used, the particle filtration efficiencies at particle sizes under 30 nm and near 500 nm were 48.5% and 46.8%, respectively. However, the efficiencies increased to 64.3% and 60.2%, respectively, when the CNT-deposited filter was used. The reduction in the number of viable cells was determined by counting the colony forming units (CFU) of each test filter after contact with the cells. The pristine glass fiber filter was used as a control, and 83.7% of the E. coli were inactivated on the CNT-deposited filter. PMID:21767869
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.
1961-01-01
The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.
NASA Astrophysics Data System (ADS)
Bellini, Nicola; Gu, Yu; Amato, Lorenzo; Eaton, Shane; Cerullo, Giulio; Osellame, Roberto
2012-03-01
We report on the integration of a size-based three-dimensional filter, with micrometer-sized pores, in a commercial microfluidic chip. The filter is fabricated inside an already sealed microfluidic channel using the unique capabilities of two-photon polymerization. This direct-write technique enables integration of the filter by post-processing in a chip that has been fabricated by standard technologies. The filter is located at the intersection of two channels in order to control the amount of flow passing through the filter. Tests with a suspension of 3-µm polystyrene spheres in a Rhodamine 6G solution show that 100% of the spheres are stopped, while the fluorescent molecules are transmitted through the filter. We demonstrate operation for up to 25 minutes without any evidence of clogging. Moreover, the filter can be cleaned and reused by reversing the flow.
Diesel particle filter and fuel effects on heavy-duty diesel engine emissions.
Ratcliff, Matthew A; Dane, A John; Williams, Aaron; Ireland, John; Luecke, Jon; McCormick, Robert L; Voorhees, Kent J
2010-11-01
The impacts of biodiesel and a continuously regenerated (catalyzed) diesel particle filter (DPF) on the emissions of volatile unburned hydrocarbons, carbonyls, and particle-associated polycyclic aromatic hydrocarbons (PAH) and nitro-PAH were investigated. Experiments were conducted on a 5.9 L Cummins ISB heavy-duty diesel engine using certification ultra-low-sulfur diesel (ULSD, S ≤ 15 ppm), soy biodiesel (B100), and a 20% blend thereof (B20). Against the ULSD baseline, B20 and B100 reduced engine-out emissions of measured unburned volatile hydrocarbons and PM-associated PAH and nitro-PAH by significant percentages (40% or more for B20, and higher percentages for B100). However, emissions of benzene were unaffected by the presence of biodiesel, and emissions of naphthalene actually increased for B100. This suggests that the unsaturated FAME in soy biodiesel can react to form aromatic rings in the diesel combustion environment. Methyl acrylate and methyl 3-butanoate were observed as significant species in the exhaust for B20 and B100 and may serve as markers of the presence of biodiesel in the fuel. The DPF was highly effective at converting gaseous hydrocarbons and PM-associated PAH and total nitro-PAH. However, conversion of 1-nitropyrene by the DPF was less than 50% for all fuels. Blending of biodiesel caused a slight reduction in engine-out emissions of acrolein, but otherwise had little effect on carbonyl emissions. The DPF was highly effective for conversion of carbonyls, with the exception of formaldehyde; formaldehyde emissions were increased by the DPF for ULSD and B20. PMID:20886845
Usefulness of Nonlinear Interpolation and Particle Filter in Zigbee Indoor Positioning
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Wu, Helei; Uradziński, Marcin
2014-12-01
The key to fingerprint positioning algorithms is establishing an effective fingerprint database of received signal strength indicator (RSSI) values from different reference nodes. The traditional method is to set up many calibration sampling points across the localization area and to collect a large number of samples, which is very time consuming. Using ZigBee sensor networks as the platform, and considering the influence of interference on the positioning signal, we propose an improved algorithm that builds a virtual fingerprint database by polynomial interpolation, with the preliminary position estimate then refined by a particle filter. Experimental results show that this method can generate a quick, simple, fine-grained localization information database and improve positioning accuracy at the same time.
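Polynomial interpolation of a handful of calibrated RSSI measurements is enough to populate a virtual fingerprint grid; the sketch below uses plain Lagrange interpolation along one corridor axis, with all positions and RSSI values invented for illustration:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at the query position x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Usage: from RSSI measured at a few calibration distances along a corridor,
# generate virtual fingerprint values on a finer grid of positions.
cal_pos = [1.0, 3.0, 5.0, 7.0]           # metres from the reference node
cal_rssi = [-40.0, -55.0, -63.0, -68.0]  # measured RSSI in dBm
virtual = [lagrange_interpolate(cal_pos, cal_rssi, d) for d in (2.0, 4.0, 6.0)]
```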
Qiu, Liping; Zhang, Shoubin; Wang, Guangwei; Du, Mao'an
2010-10-01
The performance and nitrification properties of three BAFs, with ceramic, zeolite and carbonate media, respectively, were investigated to evaluate the feasibility of employing these materials as biological aerated filter media. All three BAFs showed promising COD and SS removal performance when the influent pH was 6.5-8.1, the air-liquid ratio was 5:1, and the HRT was 1.25-2.5 h. Ammonia removal in the BAFs was inhibited when the organic and ammonia nitrogen loadings were increased, but was promoted effectively by increasing the pH value. Zeolite and carbonate were more suitable for nitrification than ceramic particles when the influent pH was below 6.5. It is feasible to employ these media in BAFs, but adequate bed volume has to be supplied to satisfy the requirement of removing COD, SS and ammonia nitrogen simultaneously in a biofilter. Carbonate, with its strong buffer capacity, is more suitable for treating wastewater with variable or lower pH. PMID:20483593
Particle filter with a mode tracker for visual tracking across illumination changes.
Das, Samarjit; Kale, Amit; Vaswani, Namrata
2012-04-01
In this correspondence, our goal is to develop a visual tracking algorithm that is able to track moving objects in the presence of illumination variations in the scene and that is robust to occlusions. We treat the illumination and motion (x-y translation and scale) parameters as the unknown "state" sequence. The observation is the entire image, and the observation model allows for occasional occlusions (modeled as outliers). The nonlinearity and multimodality of the observation model necessitate the use of a particle filter (PF). Due to the inclusion of illumination parameters, the state dimension increases, thus making regular PFs impractically expensive. We show that the recently proposed approach using a PF with a mode tracker can be used here since, even in most occlusion cases, the posterior of illumination conditioned on motion and the previous state is unimodal and quite narrow. The key idea is to importance sample on the motion states while approximating importance sampling by posterior mode tracking for estimating illumination. Experiments demonstrate the advantage of the proposed algorithm over existing PF-based approaches on various face and vehicle tracking tasks. We are also able to detect illumination model changes, e.g., those due to transition from shadow to sunlight or vice versa, by using the generalized expected log-likelihood statistic, and successfully compensate for them without ever losing track. PMID:22067364
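The particle-filter machinery underlying such trackers can be illustrated with a minimal bootstrap (SIR) filter on a one-dimensional random-walk state; this is generic textbook PF code, not the paper's mode-tracking algorithm, and the noise parameters are invented for illustration:

```python
import math
import random

def bootstrap_pf(observations, n=500, q=0.5, r=1.0, seed=0):
    """Minimal bootstrap (SIR) particle filter for a 1-D random-walk state
    x_t = x_{t-1} + N(0, q^2), observed as y_t = x_t + N(0, r^2).
    Returns the posterior-mean state estimate at each time step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in observations:
        # propagate each particle through the motion model
        parts = [x + rng.gauss(0.0, q) for x in parts]
        # weight by the Gaussian observation likelihood, then normalize
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # stratified resampling: one uniform draw per stratum of width 1/n
        cdf, acc = [], 0.0
        for wi in w:
            acc += wi
            cdf.append(acc)
        new, j = [], 0
        for i in range(n):
            p = (i + rng.random()) / n
            while j < n - 1 and cdf[j] < p:
                j += 1
            new.append(parts[j])
        parts = new
    return means

# Usage: track a slowly drifting signal from noisy measurements.
rng = random.Random(1)
truth = [0.1 * t for t in range(30)]
obs = [x + rng.gauss(0.0, 1.0) for x in truth]
est = bootstrap_pf(obs)
```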
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING
Bayard, David S.; Schumitzky, Alan
2009-01-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches. PMID:21132112
Wakai, Nobuhide; Sumida, Iori; Otani, Yuki; Suzuki, Osamu; Seo, Yuji; Isohashi, Fumiaki; Yoshioka, Yasuo; Ogawa, Kazuhiko; Hasegawa, Masatoshi
2015-05-15
Purpose: The authors sought to determine the optimal collimator leaf margins which minimize normal tissue dose while achieving high conformity and to evaluate differences between the use of a flattening filter-free (FFF) beam and a flattening-filtered (FF) beam. Methods: Sixteen lung cancer patients scheduled for stereotactic body radiotherapy underwent treatment planning for 7 MV FFF and 6 MV FF beams to the planning target volume (PTV) with a range of leaf margins (−3 to 3 mm). Forty gray in four fractions was prescribed as the PTV D95. For the PTV, the heterogeneity index (HI), conformity index, modified gradient index (GI), defined as the 50% isodose volume divided by the target volume, maximum dose (Dmax), and mean dose (Dmean) were calculated. Mean lung dose (MLD), V20 Gy, and V5 Gy for the lung (defined as the volumes of lung receiving at least 20 and 5 Gy), mean heart dose, and Dmax to the spinal cord were measured as doses to organs at risk (OARs). Paired t-tests were used for statistical analysis. Results: HI was inversely related to changes in leaf margin. Conformity index and modified GI initially decreased as leaf margin width increased. After reaching a minimum, the two values then increased as leaf margin increased (“V” shape). The optimal leaf margins for conformity index and modified GI were −1.1 ± 0.3 mm (mean ± 1 SD) and −0.2 ± 0.9 mm, respectively, for 7 MV FFF, compared to −1.0 ± 0.4 and −0.3 ± 0.9 mm, respectively, for 6 MV FF. Dmax and Dmean for 7 MV FFF were higher than those for 6 MV FF by 3.6% and 1.7%, respectively. There was a positive correlation between the ratios of HI, Dmax, and Dmean for 7 MV FFF to those for 6 MV FF and PTV size (R = 0.767, 0.809, and 0.643, respectively). The differences in MLD, V20 Gy, and V5 Gy for lung between FFF and FF beams were negligible.
The optimal leaf margins for MLD, V20 Gy, and V5 Gy for lung were −0.9 ± 0.6, −1.1 ± 0.8, and −2.1 ± 1.2 mm, respectively, for 7 MV FFF, compared to −0.9 ± 0.6, −1.1 ± 0.8, and −2.2 ± 1.3 mm, respectively, for 6 MV FF. With the heart inside the radiation field, the mean heart dose showed a V-shaped relationship with leaf margins. The optimal leaf margins were −1.0 ± 0.6 mm for both beams. Dmax to the spinal cord showed no clear trend for changes in leaf margin. Conclusions: The differences in doses to OARs between FFF and FF beams were negligible. Conformity index, modified GI, MLD, lung V20 Gy, lung V5 Gy, and mean heart dose showed a V-shaped relationship with leaf margins. There were no significant differences between the FFF and FF beams in the optimal leaf margins that minimize these parameters. The authors’ results suggest that a leaf margin of −1 mm achieves high conformity and minimizes doses to OARs for both FFF and FF beams.
Cosmological parameter estimation using Particle Swarm Optimization (PSO)
Jayanti Prasad; Tarun Souradeep
2012-07-02
Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, such as Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows at most linearly, in place of exponentially (as in grid-based approaches), with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior), from which one- and two-dimensional probability distributions, best-fit (average) values of parameters, and error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), which is widely used in the fields of engineering and artificial intelligence, to cosmological parameter estimation from the WMAP seven-year data. We find that there is good agreement between the values of the best-fit parameters obtained from PSO and the publicly available code COSMOMC. However, there is a slight disagreement between the error bars, mainly due to the fact that errors are computed differently in PSO. Apart from presenting the results of our exercise, we also discuss the merits of PSO and explain its usefulness for more extensive searches in higher-dimensional parameter spaces.
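As a concrete illustration of the PSO machinery (the generic inertia-weight update rules, not the authors' WMAP/COSMOMC pipeline; the toy log-likelihood and all parameters below are invented):

```python
import random

def pso_maximize(loglike, bounds, n=30, iters=200, w=0.72, c1=1.49, c2=1.49,
                 seed=2):
    """Canonical inertia-weight PSO maximizing `loglike` over a box.
    Velocity update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [loglike(xi) for xi in x]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = loglike(x[i])
            if val > pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val > gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

# Usage: the maximum of a toy Gaussian log-likelihood peaked at (1, -2)
# should be recovered by the swarm.
best, _ = pso_maximize(lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2),
                       [(-10.0, 10.0)] * 2)
```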
OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS
Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto
2001-09-04
It is well understood that the stability of axial diffusion flames is dependent on the mixing behavior of the fuel and combustion air streams. Combustion aerodynamics texts typically describe flame stability and transitions from laminar diffusion flames to fully developed turbulent flames as a function of increasing jet velocity. Turbulent diffusion flame stability is greatly influenced by recirculation eddies that transport hot combustion gases back to the burner nozzle. This recirculation enhances mixing and heats the incoming gas streams. Models describing these recirculation eddies utilize conservation of momentum and mass assumptions. Increasing the mass flow rate of either fuel or combustion air increases both the jet velocity and momentum for a fixed burner configuration. Thus, differentiating between gas velocity and momentum is important when evaluating flame stability under various operating conditions. The research efforts described herein are part of an ongoing project directed at evaluating the effect of flame aerodynamics on NO{sub x} emissions from coal-fired burners in a systematic manner. This research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low-NO{sub x} burners. Experimental studies include both cold- and hot-flow evaluations of the following parameters: primary and secondary inlet air velocity, coal concentration in the primary air, coal particle size distribution, and flame holder geometry. Hot-flow experiments will also evaluate the effect of wall temperature on burner performance.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The method can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
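A minimal sketch of the idea, assuming a bias-and-scale error model and synthetic noiseless readings (no soft-iron terms; all constants are invented for illustration), with a compact inertia-weight PSO as the optimizer:

```python
import math
import random

def calibrate_magnetometer(readings, n=60, iters=400, seed=3):
    """Estimate per-axis bias b and scale s of a magnetometer with PSO by
    minimizing the spread of |(m - b) / s| over the readings: the true field
    magnitude is constant, so a good calibration makes all corrected
    magnitudes (nearly) equal. Returns (params, residual spread)."""
    rng = random.Random(seed)
    bounds = [(-20.0, 20.0)] * 3 + [(0.5, 2.0)] * 3  # bias box, scale box

    def spread(params):
        b, s = params[:3], params[3:]
        norms = [math.sqrt(sum(((m[k] - b[k]) / s[k]) ** 2 for k in range(3)))
                 for m in readings]
        mean = sum(norms) / len(norms)
        return math.sqrt(sum((v - mean) ** 2 for v in norms) / len(norms))

    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * 6 for _ in range(n)]
    pbest, pval = [xi[:] for xi in x], [spread(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(6):
                v[i][d] = (0.72 * v[i][d]
                           + 1.49 * rng.random() * (pbest[i][d] - x[i][d])
                           + 1.49 * rng.random() * (gbest[d] - x[i][d]))
                # clamp to the search box (keeps scales strictly positive)
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            val = spread(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

# Usage: synthetic readings distorted by a known bias and scale factor.
rng = random.Random(4)
b_true, s_true = (3.0, -5.0, 2.0), (1.2, 0.9, 1.05)
readings = []
for _ in range(80):
    u = [rng.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in u))
    h = [50.0 * c / r for c in u]  # true field, constant magnitude 50
    readings.append([s_true[k] * h[k] + b_true[k] for k in range(3)])
params, residual = calibrate_magnetometer(readings)
```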
Gupta, A.
1992-01-01
The effect of humidity, particle hygroscopicity and size on the mass loading capacity of glass fiber HEPA filters has been studied. At humidities above the deliquescent point, the pressure drop across the HEPA filter increased non-linearly with the areal loading density (mass collected/filtration area) of NaCl aerosol, thus significantly reducing the mass loading capacity of the filter compared to dry hygroscopic or non-hygroscopic particle mass loadings. The specific cake resistance, K{sub 2}, has been computed for different test conditions and used as a measure of the mass loading capacity. K{sub 2} was found to decrease with increasing humidity for the non-hygroscopic aluminum oxide particles and the hygroscopic NaCl particles (at humidities below the deliquescent point). It is postulated that an increase in humidity leads to the formation of a more open particulate cake, which lowers the pressure drop for a given mass loading. A formula for predicting K{sub 2} for lognormally distributed aerosols (parameters obtained from impactor data) is derived. The resistance factor, R, calculated using this formula was compared to the theoretical R calculated using the Rudnick-Happel expression. For the non-hygroscopic aluminum oxide the agreement was good, but for the hygroscopic sodium chloride, due to large variation in the cake porosity estimates, the agreement was poor.
Optimal Location and Number of Access Points based on Ray-Tracing and Particle Swarm
Myung, Noh-Hoon
Optimal Location and Number of Access Points based on Ray-Tracing and Particle Swarm Optimization. A method such as ray tracing, which effectively takes geometrical structures into account, can be employed. To search for optimal APs, it is required to consider not only the ray-tracing method, which analyzes the indoor radio
Sharma, Gaurav
Vrhel. Abstract: In this correspondence, the problem of designing color scanning filters for multiple illuminants can be posed as a color-filter design problem. Reference [1] described a method of computing transmittances of filters that minimized the minimum mean-squared tristimulus error. The design of color
Hafnium and neodymium isotope composition of seawater and filtered particles from the Southern Ocean
NASA Astrophysics Data System (ADS)
Stichel, T.; Frank, M.; Haley, B. A.; Rickli, J.; Venchiarutti, C.
2009-12-01
Radiogenic hafnium (Hf) and neodymium (Nd) isotopes have been used as tracers for past continental weathering regimes and ocean circulation. To date, however, very few data are available on dissolved Hf isotope compositions in present-day seawater, and there is a complete lack of particulate data. During expedition ANTXXIV/3 (February to April 2008) we collected particulate samples (> 0.8 µm), which were obtained by filtering 270-700 liters of water. The samples were separated from the filters, completely dissolved, and purified for Nd and Hf isotope determination by TIMS and MC-ICPMS, respectively. In addition, we collected filtered (0.45 µm) seawater samples (20-120 liters) to determine the dissolved isotopic composition of Hf and Nd. The Hf isotope composition of the particulate fraction in the Drake Passage ranged from 0 to −28 εHf and is thus similar to that observed in core-top sediments from the entire Southern Ocean in a previous study. The most unradiogenic and isotopically homogeneous Hf isotope compositions in our study were found near the Antarctic Peninsula. Most of the stations north of the Southern Antarctic Circumpolar Front (SACC) show a large variation in εHf between 0 and −23, both within the water column of one station and between stations. The locations at which these Hf isotope compositions were measured are mostly far away from the potential source areas. Nd, in contrast, was nearly absent throughout the entire sample set, and the only measurable εNd data ranged from 0 to −7, which is in good agreement with the sediment data in that area. The dissolved seawater isotopic compositions of both Hf and Nd show only minor variance (εHf = 4.2 to 4.7 and εNd = −8.8 to −7.6, respectively). These patterns in Hf isotopes and the nearly complete absence of Nd indicate that the particulate fraction does not contain much terrigenous material but is almost entirely dominated by biogenic opal.
The homogeneous and relatively radiogenic Hf isotope values in the dissolved fraction are interpreted as the result of large-scale water mass mixing, whereas the highly unradiogenic values observed in the particles more likely represent scavenged Hf released by physical weathering on the Antarctic continent. Our data therefore suggest a high scavenging efficiency of dissolved Hf onto opal, which is not observed for Nd. These results imply that the Southern Ocean is an efficient sink for dissolved Hf, resulting in a very short residence time of Hf in the Southern Ocean.
Guan, Fada; Bronk, Lawrence; Titt, Uwe; Lin, Steven H; Mirkovic, Dragan; Kerr, Matthew D; Zhu, X Ronald; Dinh, Jeffrey; Sobieski, Mary; Stephan, Clifford; Peeler, Christopher R; Taleei, Reza; Mohan, Radhe; Grosshans, David R
2015-01-01
The physical properties of particles used in radiation therapy, such as protons, have been well characterized, and their dose distributions are superior to photon-based treatments. However, proton therapy may also have inherent biologic advantages that have not been capitalized on. Unlike photon beams, the linear energy transfer (LET) and hence biologic effectiveness of particle beams varies along the beam path. Selective placement of areas of high effectiveness could enhance tumor cell kill and simultaneously spare normal tissues. However, previous methods for mapping spatial variations in biologic effectiveness are time-consuming and often yield inconsistent results with large uncertainties. Thus the data needed to accurately model relative biological effectiveness to guide novel treatment planning approaches are limited. We used Monte Carlo modeling and high-content automated clonogenic survival assays to spatially map the biologic effectiveness of scanned proton beams with high accuracy and throughput while minimizing biological uncertainties. We found that the relationship between cell kill, dose, and LET, is complex and non-unique. Measured biologic effects were substantially greater than in most previous reports, and non-linear surviving fraction response was observed even for the highest LET values. Extension of this approach could generate data needed to optimize proton therapy plans incorporating variable RBE. PMID:25984967
Barone, Teresa L; Storey, John M E; Domingo, Norberto
2010-08-01
A field-aged, passive diesel particulate filter (DPF) used in a school bus retrofit program was evaluated for emissions of particle mass and number concentration before, during, and after regeneration. For the particle mass measurements, filter samples were collected for gravimetric analysis with a partial flow sampling system, which sampled proportionally to the exhaust flow. A condensation particle counter and scanning mobility particle sizer measured total number concentration and number-size distributions, respectively. The results of the evaluation show that the number concentration emissions decreased as the DPF became loaded with soot. However, after soot removal by regeneration, the number concentration emissions were approximately 20 times greater, which suggests the importance of the soot layer in helping to trap particles. Contrary to the number concentration results, particle mass emissions decreased from 6 +/- 1 mg/hp-hr before regeneration to 3 +/- 2 mg/hp-hr after regeneration. This indicates that nanoparticles with diameters less than 50 nm may have been emitted after regeneration because these particles contribute little to the total mass. Overall, average particle emission reductions of 95% by mass and 10,000-fold by number concentration after 4 yr of use provided evidence of the durability of a field-aged DPF. In contrast to previous reports for new DPFs in which elevated number concentrations occurred during the first 200 sec of a transient cycle, the number concentration emissions were elevated during the second half of the heavy-duty Federal Test Procedure (FTP) when high speed was sustained. This information is relevant for the analysis of mechanisms by which particles are emitted from field-aged DPFs. PMID:20842937
NASA Astrophysics Data System (ADS)
Walker, Eric; Rayman, Sean; White, Ralph E.
2015-08-01
A particle filter (PF) is shown to be more accurate than non-linear least squares (NLLS) and an unscented Kalman filter (UKF) for predicting the remaining useful life (RUL) and time until end-of-discharge voltage (EODV) of a lithium-ion battery. The three algorithms (PF, UKF, and NLLS) track four states, with correct initial estimates of the states and with 5% variation on the initial state estimates. The four states are data-driven quantities, equivalent-circuit properties, or lithium concentrations and electroactive surface areas, depending on the model. The more accurate prediction performance of the PF over NLLS and the UKF is reported for three lithium-ion battery models: a data-driven empirical model, an equivalent circuit model, and a physics-based single particle model.
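A minimal bootstrap particle filter of the kind compared in such studies can be sketched as follows. This is an illustrative sketch only: the random-walk state model, the noise levels, and the particle count are assumptions for demonstration, not the paper's battery models.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, q=0.05, r=0.2):
    """Bootstrap particle filter for a random-walk state observed in noise."""
    particles = rng.normal(0.0, 1.0, n_particles)    # initial particle cloud
    estimates = []
    for y in observations:
        # propagate: random-walk process model with noise std q
        particles = particles + rng.normal(0.0, q, n_particles)
        # weight by the Gaussian likelihood of the observation (noise std r)
        weights = np.exp(-0.5 * ((y - particles) / r) ** 2)
        weights /= weights.sum()
        # systematic resampling to avoid weight degeneracy
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        particles = particles[np.searchsorted(np.cumsum(weights), positions)]
        estimates.append(particles.mean())
    return np.array(estimates)

# synthetic truth: slow drift observed through additive Gaussian noise
truth = np.linspace(0.0, 1.0, 50)
obs = truth + rng.normal(0.0, 0.2, truth.size)
est = particle_filter(obs)
print(float(np.abs(est[-10:] - truth[-10:]).mean()))  # small tracking error
```

The same posterior-mean readout is what such studies extrapolate forward to predict RUL or EODV.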
Matched filter optimization of kSZ measurements with a reconstructed cosmological flow field
NASA Astrophysics Data System (ADS)
Li, Ming; Angulo, R. E.; White, S. D. M.; Jasche, J.
2014-09-01
We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty: cosmic microwave background primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentring of clusters and by uncertainties of the mass-observable relation (normalization and scatter). We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to ~13σ. We point out that such measurements could be binned in order to study the properties of the cosmic gas and velocity fields, or combined into a single measurement to constrain cosmological parameters or deviations of the law of gravity from General Relativity.
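The matched-filter idea, weighting the data by the inverse noise covariance against a known template, can be illustrated in a toy one-dimensional setting. The Gaussian template, the white-noise covariance, and the amplitudes below are assumptions for illustration, not the paper's kSZ pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Template (known signal shape) and noise covariance (assumed known, white).
n = 200
t = np.arange(n)
template = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)   # toy cluster profile
C = np.diag(np.full(n, 0.5 ** 2))                   # white noise, std 0.5

# Matched-filter amplitude estimator: a_hat = s^T C^-1 d / (s^T C^-1 s),
# the minimum-variance linear unbiased estimator of the template amplitude.
Cinv_s = np.linalg.solve(C, template)
norm = template @ Cinv_s

true_amp = 2.0
trials = [(template * true_amp + rng.normal(0, 0.5, n)) @ Cinv_s / norm
          for _ in range(500)]
# mean of trials should be close to true_amp; scatter close to sqrt(1/norm)
print(float(np.mean(trials)), float(np.sqrt(1.0 / norm)))
```

The measured scatter across trials matches the filter's predicted noise, which is the property the paper exploits to forecast detection significance.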
Optimal ensemble size of ensemble Kalman filter in sequential soil moisture data assimilation
NASA Astrophysics Data System (ADS)
Yin, Jifu; Zhan, Xiwu; Zheng, Youfei; Hain, Christopher R.; Liu, Jicheng; Fang, Li
2015-08-01
The ensemble Kalman filter (EnKF) has been extensively applied in sequential soil moisture data assimilation to improve land surface model performance and, in turn, weather forecast capability. Usually, the ensemble size of the EnKF is determined with limited sensitivity experiments, so the optimal ensemble size may never have been reached. In this work, based on a series of mathematical derivations, we demonstrate that the maximum efficiency of the EnKF for assimilating observations into the models could be reached when the ensemble size is set to 12. Simulation experiments are designed in this study with ensemble sizes of 2, 5, 12, 30, 50, 100, and 300 to support the mathematical derivations. All the simulations are conducted from 1 June to 30 September 2012 over the southeast USA (from 90°W, 30°N to 80°W, 40°N) at 25 km resolution. We found that the simulations are perfectly consistent with the mathematical derivation. This optimal ensemble size may have theoretical implications for the implementation of the EnKF in other sequential data assimilation problems.
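A single stochastic-EnKF analysis step can be sketched as follows. The three-variable state, the observation operator, and the noise variances are illustrative assumptions; only the 12-member ensemble size echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, y_obs, H, r_var):
    """One stochastic-EnKF analysis step for a scalar observation.

    ensemble : (n_members, n_state) forecast ensemble
    H        : (n_state,) linear observation operator
    """
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Hx = ensemble @ H                               # observed-space ensemble
    HX = Hx - Hx.mean()
    n = ensemble.shape[0]
    P_HT = X.T @ HX / (n - 1)                       # cross covariance P H^T
    HPHT = HX @ HX / (n - 1)                        # H P H^T (scalar)
    K = P_HT / (HPHT + r_var)                       # Kalman gain (vector)
    # perturbed-observation form: each member sees its own noisy observation
    y_pert = y_obs + rng.normal(0.0, np.sqrt(r_var), n)
    return ensemble + np.outer(y_pert - Hx, K)

# toy example: 3-variable state, observe only the first component
truth = np.array([1.0, 2.0, 3.0])
H = np.array([1.0, 0.0, 0.0])
ens = truth + rng.normal(0.0, 1.0, (12, 3))         # 12 members (cf. abstract)
y = truth @ H + rng.normal(0.0, 0.1)
analysis = enkf_update(ens, y, H, r_var=0.01)
print(analysis.mean(axis=0))                        # pulled toward the truth
```

The analysis ensemble's spread in the observed component shrinks relative to the forecast, which is the mechanism whose efficiency the abstract analyzes as a function of ensemble size.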
An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.
2007-01-01
A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
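The core linear-algebra step, using a truncated SVD to represent a larger parameter set with a low-dimensional tuning vector in a least-squares sense, can be illustrated as below. The influence-matrix values and the chosen rank are hypothetical, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical influence matrix: how 10 health parameters shift 4 outputs.
M = rng.normal(size=(4, 10))

# Truncated SVD: keep the k dominant directions so a k-dimensional "tuning
# vector" captures the health-parameter effects as closely as possible.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 3
M_k = U[:, :k] * s[:k] @ Vt[:k, :]

# By the Eckart-Young theorem, the Frobenius-norm error of the best rank-k
# approximation equals the energy in the discarded singular values.
err = np.linalg.norm(M - M_k)
print(float(err), float(np.sqrt((s[k:] ** 2).sum())))
```

Keeping the dimension k small is what makes the reduced vector estimable by a Kalman filter with the available sensors.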
Lee, Chang Jun
2015-12-01
In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies have lacked, however, is the transformation of heuristics and safety regulations into mathematical equations. For example, proper safety distances between equipment items must be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, while many multi-floor plants have been constructed over the last decade; a proper algorithm handling various regulations and multi-floor plants is therefore needed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated from mathematical equations. The objective function is the sum of pipeline and pumping costs, and the various safety and maintenance issues are transformed into inequality or equality constraints. The resulting problem is hard to solve because of its complex nonlinear constraints, which rule out conventional derivative-based MINLP solvers. The Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant case study verifies the efficacy of the approach. PMID:26027708
Ashbaugh, Lowell L; Eldred, Robert A
2004-01-01
The extent of mass loss on Teflon filters caused by ammonium nitrate volatilization can be a substantial fraction of the measured particulate matter with an aerodynamic diameter less than 2.5 microm (PM2.5) or 10 microm (PM10) mass and depends on where and when it was collected. There is no straightforward method to correct for the mass loss using routine monitoring data. In southern California during the California Acid Deposition Monitoring Program, 30-40% of the gravimetric PM2.5 mass was lost during summer daytime. Lower mass losses occurred at more remote locations. The estimated potential mass loss in the Interagency Monitoring of Protected Visual Environments network was consistent with the measured loss observed in California. The biased mass measurement implies that use of Federal Reference Method data for fine particles may lead to control strategies that are biased toward sources of fugitive dust, other primary particle emission sources, and stable secondary particles (e.g., sulfates). This analysis clearly supports the need for speciated analysis of samples collected in a manner that preserves volatile species. Finally, although there is loss of volatile nitrate (NO3-) from Teflon filters during sampling, the NO3- remaining after collection is quite stable. We found little loss of NO3- from Teflon filters after 2 hr under vacuum and 1 min of heating by a cyclotron proton beam. PMID:14871017
Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization
Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee
2014-01-01
Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumption of the function to be optimized and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature. PMID:25285268
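A minimal global-best PSO of the kind described can be sketched as follows. The sphere test function, swarm size, and coefficient values are illustrative assumptions, not the paper's optimal-design objectives.

```python
import numpy as np

rng = np.random.default_rng(4)

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; f maps a (dim,) array to a scalar cost."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (own best) + social pull (swarm best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# sphere function: optimum at the origin with value 0
best_x, best_f = pso(lambda p: float((p ** 2).sum()), dim=5)
print(best_f)
```

Note that, as the abstract says, nothing in the loop assumes differentiability of f; the swarm only ever evaluates it pointwise.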
Recovering Latent Time-Series from their Observed Sums: Network Tomography with Particle Filters.
Murphy, Robert F.
Particle Filter, Informative Priors, Empirical Bayes 1. INTRODUCTION Knowledge about the origin, typically from observed sums. Our driving application is 'network tomography', where we need to estimate, sums of desirable OD flows. In this paper we propose i-FILTER, a method to solve this problem, which
Bystroff, Chris
discontinuities. This fact makes Kalman filters (which assume linearity) and extended Kalman filters (which assume Why is autonomous grasping and manipulation in unstructured environments still so hard for robots a database, has a vision system that can track the object, and tactile sensors that can aid tracking when
Stochastic global optimization as a filtering problem (arXiv:0912.4072v1 [math.NA], 21 Dec 2009)
Del Moral , Pierre
a reformulation of stochastic global optimization as a filtering problem. The motivation behind algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection
The determination and optimization of (rutile) pigment particle size distributions
NASA Technical Reports Server (NTRS)
Richards, L. W.
1972-01-01
A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.
Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.
2013-03-01
For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.
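The grid-search idea, scoring every combination of user-defined filter thresholds and keeping the best-performing "parameter set", can be sketched as follows. The threshold names and the scoring function are purely illustrative stand-ins, not the STEPS implementation.

```python
import itertools

# Toy stand-in for "confident identifications at a fixed error rate" as a
# function of two filter thresholds (names and shape are hypothetical).
def confident_ids(score_cut, ppm_cut):
    return -(score_cut - 2.0) ** 2 - (ppm_cut - 10.0) ** 2 + 500

# exhaustively test every combination of the user-defined parameter ranges
grid = itertools.product([1.0, 1.5, 2.0, 2.5], [5.0, 10.0, 20.0])
best = max(grid, key=lambda p: confident_ids(*p))
print(best)  # -> (2.0, 10.0), the combination maximizing identifications
```

In practice the scoring function would re-filter the actual search-engine output at each threshold combination and count identifications passing a fixed false-discovery rate.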
NASA Astrophysics Data System (ADS)
Varjonen, Mari; Strömmer, Pekka
2008-03-01
This paper presents the optimized image quality and average glandular dose in digital mammography and provides recommendations concerning anode-filter combinations in digital mammography based on amorphous selenium (a-Se) detector technology. The full-field digital mammography (FFDM) system based on a-Se technology, which is also the platform of a tomosynthesis prototype, was used in this study. The x-ray tube anode-filter combinations studied were tungsten (W)-rhodium (Rh) and tungsten (W)-silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two different ways: a subjective image quality evaluation, and contrast and noise analysis. Using W-Rh and W-Ag anode-filter combinations, a significantly lower average glandular dose can be achieved compared with molybdenum (Mo)-molybdenum (Mo) or Mo-Rh; the average glandular dose reduction ranged from 25% to 60%. Future evaluation will concentrate on more filter combinations and the effect of higher tube voltages (>35 kV), which seem useful for optimizing dose in digital mammography.
I. M. Neklyudov; O. P. Ledenyov; L. I. Fedorova; P. Ya. Poltinin
2013-06-11
The spatial distributions of small dispersive coal dust particles of nano and micro sizes in the granular filtering medium of cylindrical coal granules in the absorber of a horizontal iodine air filter during its long-term operation at a nuclear power plant are researched. It is shown that concentration density maxima of the small dispersive coal dust particles appear in the granular filtering medium of cylindrical coal absorbent granules in the horizontal iodine air filter under the action of an air dust aerosol flow. The measured aerodynamic resistances of the horizontal and vertical iodine air filters are compared. The main conclusion is that the aerodynamic resistance of the horizontal iodine air filter is much smaller than that of the vertical iodine air filter at the same loads of air dust aerosol volumes. This is explained by the fact that the direction of the air dust aerosol flow and the direction of the gravitational force in the horizontal iodine air filter are orthogonal; hence, the small dispersive coal dust particles accumulate effectively at the bottom of the absorber in the horizontal iodine air filter. It is found that, in contrast to the vertical iodine air filter, the air dust aerosol stream flow in the horizontal iodine air filter is not restricted by structures of precipitated small dispersive coal dust particles during long-term operation of the iodine air filters at the nuclear power plant.
Saito, Masatoshi
2007-11-15
Dual-energy contrast agent-enhanced mammography is a technique of demonstrating breast cancers obscured by a cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures to different x-ray tube voltages. In this article, another dual-energy approach using the balanced filter method without switching the tube voltages is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity--in this case, mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm^2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components - acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details on the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Yue, Qinyan; Han, Shuxin; Yue, Min; Gao, Baoyu; Li, Qian; Yu, Hui; Zhao, Yaqin; Qi, Yuanfeng
2009-11-01
Two lab-scale upflow biological anaerobic filters (BAF) packed with sludge-fly ash ceramic particles (SFCP) and commercial ceramic particles (CCP) were employed to investigate the effects of C/N ratios and filter media on BAF performance during the restart period. The results indicated that the BAF could be restarted normally after a one-month cease. The C/N ratio of 4.0 was the threshold of nitrate removal and nitrite accumulation. TN removal and phosphate uptake reached their maximum values at the same C/N ratio of 5.5. Ammonia formation was also found and exerted a negative influence on TN removal, especially when higher C/N ratios were applied. Nutrients were mainly degraded within a height of 25 cm from the bottom. In addition, SFCP, a novel filter medium manufactured from waste-dewatered sludge and fly ash, showed better potential in inhibiting nitrite accumulation, TN removal and phosphate uptake due to its special characteristics in comparison with CCP. PMID:19520569
NASA Astrophysics Data System (ADS)
Ukasha, M.; Ramirez, J. A.
2014-12-01
Signals from the Gravity Recovery and Climate Experiment (GRACE) twin-satellite mission mapping the time-variant Earth's gravity field are degraded by measurement and leakage errors. Damping these errors using different filters modifies the true geophysical signals; a scale factor is therefore used to recover the modified signals. For basin-averaged dS/dt anomalies computed from data available at the University of Colorado GRACE data analysis website (http://geoid.colorado.edu/grace/), optimal time-invariant and time-variant scale factors for the Sacramento and San Joaquin river basins, California, are derived using observed precipitation (P), runoff (Q) and evapotranspiration (ET). Applying the derived optimal scale factor to GRACE data filtered with a 300 km wide Gaussian filter yields scaled GRACE dS/dt anomalies that match the observed dS/dt anomalies (P-ET-Q) better than the GRACE dS/dt anomalies computed from the scaled GRACE product at the University of Colorado GRACE data analysis website. This paper will present the procedure, the optimal values, and the statistical analysis of the results.
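The time-invariant optimal scale factor has a closed least-squares form, illustrated below on synthetic series. The damping factor, the noise level, and the seasonal signal are assumptions for illustration, not GRACE data.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic "observed" dS/dt (P - ET - Q) and a filtered GRACE version whose
# amplitude was damped by the Gaussian smoothing (damping factor assumed)
months = 60
obs = np.sin(2 * np.pi * np.arange(months) / 12.0)   # seasonal cycle
grace = 0.6 * obs + rng.normal(0.0, 0.05, months)    # damped + noisy

# time-invariant optimal scale factor in the least-squares sense:
# k = argmin_k sum (obs - k * grace)^2 = (grace . obs) / (grace . grace)
k = float(grace @ obs / (grace @ grace))
print(k)  # should roughly undo the assumed 0.6 damping
```

A time-variant version would repeat the same fit over a sliding window or per season instead of over the full record.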
A radiative transfer scheme that considers absorption, scattering, and distribution of light-absorbing elemental carbon (EC) particles collected on a quartz-fiber filter was developed to explain simultaneous filter reflectance and transmittance observations prior to and during...
I. M. Neklyudov; O. P. Ledenyov; L. I. Fedorova; P. Ya. Poltinin
2013-02-18
The main purpose of this research is to determine the influence of small dispersive coal dust particles of different fractional consistence on the technical characteristics of the vertical iodine air filter at a nuclear power plant. The research on the transport properties of the small dispersive coal dust particles in the granular filtering medium of the absorber in the vertical iodine air filter is completed for the case when the modeled aerodynamic conditions are similar to the real aerodynamic conditions. It is shown that the appearance of different fractional consistences of small dispersive coal dust particles, with dimensions decreasing down to micro and nano sizes under the action of the air dust aerosol stream, normally results in a significant change in the distribution of the small dispersive coal dust particle masses in the granular filtering medium of the absorber in the vertical iodine air filter, changing the filter's aerodynamic characteristics. A precise characterization of the aerodynamic resistance of a model of the vertical iodine air filter is completed. A comparative analysis of the technical characteristics of the vertical and horizontal iodine air filters is also made.
Coffey, B.M.; Krasner, S.W.; Sclimenti, M.J.; Hacker, P.A.; Gramith, J.T.
1996-11-01
Biofiltration tests were performed at the Metropolitan Water District of Southern California's 5.5-mgd (21,000 m^3/d) demonstration plant using two 400 ft^2 (37 m^2) anthracite/sand filters and a 6 ft^2 (0.56 m^2) granular activated carbon (GAC)/sand filter operated in parallel. The empty-bed contact time (EBCT) within the GAC and anthracite ranged from 2.1-3.1 min. The filters were evaluated based on (1) conventional filtration performance (turbidity, particle removal, and headloss); (2) removal of biodegradable ozone by-products (assimilable organic carbon [AOC], aldehydes, and aldoketoacids) after startup; (3) removal of biodegradable ozone by-products at steady state; and (4) resistance to short-term process upsets such as intermittent chlorination or filter out-of-service time. Approximately 80 percent formaldehyde removal was achieved by the anthracite/sand filter operated at a 2.1-min EBCT (6 gpm/ft^2 [15 m/h]) within 8 days of ozone operation. The GAC/sand filter operated at the same rate achieved 80 percent removal within 1 day, possibly as an additive effect of adsorption and biological removal. In-depth aldehyde monitoring at four depths (0.5-min EBCT intervals) provided additional insight into the removal kinetics. During periods of warmer water temperature, from 20 to 48 percent of the AOC was removed in the flocculation/sedimentation basins by 40-75 percent. This percentage removal typically resulted in AOC concentrations within 40 µg C/L of the raw, unozonated water levels.
NASA Astrophysics Data System (ADS)
Jiang, Xun-Gao
1995-01-01
The energy resolution of a prism-mirror-prism (PMP) imaging energy filter, used for electron energy loss microanalysis, is limited by the aperture aberrations of its magnetic prism. The aberrations can be minimized by appropriately curving the pole-faces of the prism. In this thesis a computer-aided design procedure is described for optimizing the curvatures. The procedure accurately takes into account the influence of fringing fields on the optical properties of the prism and allows a realistic performance evaluation. An optimized PMP filter with an improved resolution has been developed in this way. For example, at an incident electron energy of 80 keV and an acceptance half-angle of 10 mradian, the filter has a resolution of 1.3 eV, a factor of 18 better than that of an equivalent system with a straight-face prism. The validity of the filter design depends on the correct determination of fringing magnetic fields. To verify the theoretical field calculations, an oscillating-loop magnetometer has been built. The device has a linear spatial resolution of 0.1 mm, and is well suited for measuring rapidly decreasing fringing fields. The measured fringing field distribution is in good agreement with the theoretical calculations within a maximum discrepancy of +/- 1% B_0, with B_0 being the uniform flux density inside the prism. The new PMP filter has been constructed and installed on a Siemens EM-102 microscope in our laboratory. Under the experimental conditions of an operating voltage of 60 kV and an acceptance half-angle of 8.5 mradian, the resolution of the filter is 0.5 eV, defined as the measured full-width-at-half-maximum of the intensity distribution of the aberration figure on the energy selecting plane. The much improved energy resolution of the optimized PMP imaging filter has made it possible to explore an exciting area of electron energy loss microanalysis, the detection and localization of molecular compounds by their characteristic excitations.
A preliminary study, using embedded hematin (a chromophore) crystals as test specimens, has clearly demonstrated the feasibility of this technique in the presence of beam-induced radiation damage.
The impact of filtering direct-feedthrough on the x-space theory of magnetic particle imaging
NASA Astrophysics Data System (ADS)
Lu, Kuan; Goodwill, Patrick; Zheng, Bo; Conolly, Steven
2011-03-01
Magnetic particle imaging (MPI) is a new medical imaging modality that maps the instantaneous response of superparamagnetic particles under an applied magnetic field. In MPI, the excitation and detection of the nanoparticles occur simultaneously. Therefore, when a sinusoidal excitation field is applied to the system, the received signal spectrum contains both harmonics from the particles and a direct feedthrough signal from the source at the fundamental drive frequency. Removal of the induced feedthrough signal from the received signal requires significant filtering, which also removes part of the signal spectrum. In this paper, we present a method to investigate the impact of temporally filtering out individual lower order harmonics on the reconstructed x-space image. Analytic and simulation results show that the loss of particle signal at low frequency leads to a recoverable loss of low spatial frequency information in the x-space image. Initial experiments validate the findings and demonstrate the feasibility of the recovery of the lost signal. This builds on earlier work that discusses the ideal one-dimensional MPI system and harmonic decomposition of the MPI signal.
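The effect described, where removing the direct feedthrough at the drive frequency also removes any particle signal at that frequency, can be illustrated with an idealized frequency-domain notch. The sampling rate, drive frequency, and harmonic amplitudes below are assumptions for illustration.

```python
import numpy as np

fs = 1000.0                        # sampling rate in Hz (assumed)
f0 = 25.0                          # drive-field fundamental in Hz (assumed)
t = np.arange(1000) / fs           # one second of received signal

# toy received signal: large direct feedthrough at f0 plus weaker
# odd harmonics standing in for the particle response
signal = (10.0 * np.sin(2 * np.pi * f0 * t)
          + 1.0 * np.sin(2 * np.pi * 3 * f0 * t)
          + 0.5 * np.sin(2 * np.pi * 5 * f0 * t))

# remove the fundamental bin in the frequency domain (idealized notch)
spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spec[np.abs(freqs - f0) < 1.0] = 0.0
filtered = np.fft.irfft(spec, n=t.size)

# the feedthrough is gone, but any particle signal at f0 is lost with it
mag = np.abs(np.fft.rfft(filtered))
print(mag[np.argmin(np.abs(freqs - f0))], mag[np.argmin(np.abs(freqs - 3 * f0))])
```

In the paper's terms, this lost low-order content maps to missing low spatial frequencies in the x-space image, which the authors show can be recovered.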
Soontrapa, Chaiyod
2005-01-01
Modifying material properties provides another approach to optimize coated particle fuel used in pebble bed reactors. In this study, the MIT fuel performance model (TIMCOAT) was applied after benchmarking against the ...
NASA Astrophysics Data System (ADS)
Mandal, K. K.; Tudu, B.; Chakraborty, N.
Optimum scheduling of hydrothermal plants is an important task for the economic operation of power systems. Many evolutionary techniques, such as particle swarm optimization and differential evolution, have been applied to these problems and found to perform better than conventional optimization methods; often, however, they converge prematurely to a sub-optimal solution. This paper presents a new improved particle swarm optimization technique, self-organizing hierarchical particle swarm optimization with time-varying acceleration coefficients (SOHPSO_TVAC), for solving daily economic generation scheduling of hydrothermal systems while avoiding premature convergence. The performance of the proposed method is demonstrated on a sample test system comprising cascaded reservoirs, and the results are compared with those obtained by other methods. The results show that the proposed technique is capable of producing comparable results.
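The time-varying acceleration coefficients can be sketched with the common linear schedule from the TVAC literature; the 2.5/0.5 endpoints are the usual choice reported there and should be treated as an assumption here, not as this paper's exact settings.

```python
def tvac(iter_k, max_iter, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying acceleration coefficients (linear schedule):
    early iterations favor the cognitive term (exploration of personal
    bests), later iterations favor the social term (convergence on the
    global best), which is what counters premature convergence."""
    frac = iter_k / max_iter
    c1 = c1_i + (c1_f - c1_i) * frac   # cognitive coefficient decreases
    c2 = c2_i + (c2_f - c2_i) * frac   # social coefficient increases
    return c1, c2

print(tvac(0, 100), tvac(100, 100))   # (2.5, 0.5) at start, (0.5, 2.5) at end
```

Inside a PSO velocity update, c1 and c2 would simply be recomputed from this schedule at every iteration instead of being held constant.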
Donner, René; Menze, Bjoern H.; Bischof, Horst; Langs, Georg
2013-01-01
The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates’ weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. PMID:23664450
Characterization and optimization of acoustic filter performance by experimental design methodology.
Gorenflo, Volker M; Ritter, Joachim B; Aeschliman, Dana S; Drouin, Hans; Bowen, Bruce D; Piret, James M
2005-06-20
Acoustic cell filters operate at high separation efficiencies with minimal fouling and have provided a practical alternative for up to 200 L/d perfusion cultures. However, the operation of cell retention systems depends on several settings that should be adjusted depending on the cell concentration and perfusion rate. The impact of operating variables on the separation efficiency (SE) performance of a 10-L acoustic separator was characterized using a factorial design of experiments. For the recirculation mode of separator operation, bioreactor cell concentration, perfusion rate, power input, stop time and recirculation ratio were studied using a fractional factorial 2^(5-1) design, augmented with axial and center point runs. One complete replicate of the experiment was carried out, consisting of 32 more runs, at 8 runs per day. Separation efficiency was the primary response and was fitted by a second-order model using restricted maximum likelihood estimation. By backward elimination, the model equation for both experiments was reduced to 14 significant terms. The response surface model for the separation efficiency was tested using additional independent data to check the accuracy of its predictions, to explore robust operation ranges and to optimize separator performance. A recirculation ratio of 1.5 and a stop time of 2 s improved the separator performance over a wide range of separator operation. At a power input of 5 W, the broad range of robust high SE performance (95% or higher) was extended to over 8 L/d. The reproducible model testing results over a total period of 3 months illustrate both the stable separator performance and the applicability of the model developed to long-term perfusion cultures. PMID:15858795
Generating Optimal Initial Conditions for Smoothed Particle Hydrodynamics Simulations
NASA Astrophysics Data System (ADS)
Diehl, S.; Rockefeller, G.; Fryer, C. L.; Riethmiller, D.; Statler, T. S.
2015-12-01
We review existing smoothed particle hydrodynamics setup methods and outline their advantages, limitations, and drawbacks. We present a new method for constructing initial conditions for smoothed particle hydrodynamics simulations, which may also be of interest for N-body simulations, and demonstrate this method on a number of applications. This new method is inspired by adaptive binning techniques using weighted Voronoi tessellations. Particles are placed and iteratively moved based on their proximity to neighbouring particles and the desired spatial resolution. This new method can satisfy arbitrarily complex spatial resolution requirements.
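The iterative placement idea, particles nudged apart until their spacing matches the desired resolution, can be sketched in a crude form for a uniform target density on the unit square (the spacing target, step size and iteration count are illustrative assumptions, and a real weighted-Voronoi scheme would use the full tessellation rather than nearest neighbours):

```python
import math
import random

def relax_particles(pts, iters=50, step=0.2):
    """Crude relaxation: each particle is pushed away from its nearest
    neighbour in proportion to the spacing deficit, which evens out the
    particle distribution toward the target spacing."""
    n = len(pts)
    target = 1.0 / math.sqrt(n)  # ideal spacing for n particles on a unit square
    for _ in range(iters):
        new = []
        for i, (x, y) in enumerate(pts):
            # nearest neighbour of particle i
            j = min((k for k in range(n) if k != i),
                    key=lambda k: (pts[k][0] - x) ** 2 + (pts[k][1] - y) ** 2)
            dx, dy = x - pts[j][0], y - pts[j][1]
            d = math.hypot(dx, dy) or 1e-12
            if d < target:
                push = step * (target - d) / d  # move apart, never crossing
                x, y = x + push * dx, y + push * dy
            # clamp to the unit square
            new.append((min(1.0, max(0.0, x)), min(1.0, max(0.0, y))))
        pts = new
    return pts

# usage: relax an initially random particle distribution
random.seed(1)
pts0 = [(random.random(), random.random()) for _ in range(100)]
pts1 = relax_particles(pts0)
```

After relaxation the minimum inter-particle spacing grows substantially, which is the property an SPH initial condition needs to avoid spurious density noise.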
Wang, S L; Singer, M A
2009-07-13
The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.
Adaptive linear prediction of radiation belt electrons using the Kalman filter
Adaptive linear prediction of radiation belt electrons using the Kalman filter E. J. Rigler, D. N system identification scheme, based on the Kalman filter with process noise, to determine optimal time: Energetic particles, trapped; 2722 Magnetospheric Physics: Forecasting; KEYWORDS: Kalman filter, electron
NASA Astrophysics Data System (ADS)
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, including radio-frequency identification (RFID), ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from photo diodes and a trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to give the algorithm tracking capability, and higher accuracy is reached compared to raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
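The trilateration step can be sketched as follows, in 2-D for brevity: subtracting the first range-circle equation from the others linearises the system, leaving two linear equations in the unknown position. The anchor coordinates and ranges below are illustrative; in the paper the ranges would come from an RSS path-loss model rather than being exact.

```python
import math

def trilaterate(anchors, dists):
    """2-D trilateration from exactly three anchors: subtracting the first
    circle equation (x-xi)^2 + (y-yi)^2 = ri^2 from the other two yields a
    2x2 linear system A [x, y]^T = b, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# usage: anchors at known transmitter positions, exact ranges for illustration
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 2.0)
dists = [math.dist(a, true_pos) for a in anchors]
est = trilaterate(anchors, dists)  # recovers (1.0, 2.0)
```

With noisy RSS-derived ranges the same linear system is typically solved in a least-squares sense over more than three anchors, and the filter then smooths the sequence of estimates.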
Nature-Inspired Particle Mechanics Algorithm for Multi-Objective Optimization
Lau, Francis C.M.
, significant bio-inspired approaches such as genetic algorithm (GA) (1975) [5], ant colony optimization (ACONature-Inspired Particle Mechanics Algorithm for Multi-Objective Optimization Xiang Feng. This chapter proposes a new computational algorithm whose design is inspired by par- ticle mechanics in physics
Optimal Antenna Pattern Design For Synthetic Aperture Radar Using Particle Swarm Intelligence
Myung, Noh-Hoon
Optimal Antenna Pattern Design For Synthetic Aperture Radar Using Particle Swarm Intelligence # S operation concepts, SAR requires the various antenna patterns in order to meet the system performance such as range ambiguity. An antenna pattern optimization method for improvement of the range ambiguity
Evacuation dynamic and exit optimization of a supermarket based on particle swarm optimization
NASA Astrophysics Data System (ADS)
Li, Lin; Yu, Zhonghai; Chen, Yang
2014-12-01
A modified particle swarm optimization algorithm is proposed in this paper to investigate the dynamics of pedestrian evacuation from a fire in a public building: a supermarket with multiple exits and configurations of counters. Two distinctive evacuation behaviours, featured by the shortest-path strategy and the following-up strategy, are simulated in the model, accounting for different categories of age and sex of the pedestrians along with the impact of the fire, including gases, heat and smoke. To examine the relationship between the progress of the overall evacuation and the layout and configuration of the site, a series of simulations are conducted in various settings: without a fire and with a fire at different locations. These experiments reveal a general pattern of two-phase evacuation, i.e., a steep section and a flat section, in addition to the impact of the presence of multiple exits on the evacuation along with the geographic locations of the exits. For the study site, our simulations indicated deficiencies in the current layout and configuration of the site during evacuation and verified proposed solutions to resolve them. More specifically, to improve the effectiveness of evacuation from the site, adding an exit between Exit 6 and Exit 7 and expanding the corridor at the right side of Exit 7 would significantly reduce the evacuation time.
Particle Swarm Optimization Algorithm for Optimizing Assignment of Blood in Blood Banking System
Olusanya, Micheal O.; Arasomwan, Martins A.; Adewumi, Aderemi O.
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests for blood transfusion. While the drive for blood donation lingers, there is need for effective and efficient management of available blood in blood banking systems. Moreover, inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types in order to minimize wastages and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in literature. We propose a queue and multiple knapsack models with PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP with no blood units wasted and very low importation, where necessary, from outside the blood bank. The result therefore can serve as a benchmark and basis for decision support tools for real-life deployment. PMID:25815046
Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun
2014-01-01
One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by minimizing the number of RF rounds. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved in a small number of iterations. PMID:25121136
NASA Astrophysics Data System (ADS)
Semwal, Girish; Rastogi, Vipul
2016-01-01
A grating-assisted surface plasmon resonance waveguide sensor has been designed and optimized for sensing applications. Adaptive particle swarm optimization, in conjunction with a derivative-free method for mode computation, has been used for design optimization of the LPWG sensor. The effects of metal thickness and cladding layer thickness on the core mode and surface plasmon mode have been analyzed in detail, and the results have been used as benchmarks for setting the bounds of these variables in the optimization process. Two waveguide structures have been demonstrated for the grating-assisted surface plasmon resonance refractive index sensor. Sensitivities of 3.5×10^4 nm/RIU and 5.0×10^4 nm/RIU have been achieved with the optimized waveguide and grating parameters.
Design of a Fractional Order PID Controller Using Particle Swarm Optimization Technique
Maiti, Deepyaman; Konar, Amit
2008-01-01
Particle Swarm Optimization technique offers optimal or suboptimal solution to multidimensional rough objective functions. In this paper, this optimization technique is used for designing fractional order PID controllers that give better performance than their integer order counterparts. Controller synthesis is based on required peak overshoot and rise time specifications. The characteristic equation is minimized to obtain an optimum set of controller parameters. Results show that this design method can effectively tune the parameters of the fractional order controller.
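The tuning loop described above, PSO searching controller-parameter space against a closed-loop performance objective, can be sketched with an integer-order PID as a stand-in for the paper's fractional-order controller (the first-order plant, gain bounds, swarm settings, and ISE objective below are illustrative assumptions, not the paper's specifications):

```python
import random

def step_ise(gains, dt=0.01, T=5.0):
    """Integral-squared-error of the unit-step response for the toy plant
    dx/dt = -x + u under PID control, integrated by forward Euler."""
    kp, ki, kd = gains
    x = integ = prev_e = 0.0
    ise = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        x += (-x + u) * dt
        ise += e * e * dt
    return ise

def pso(f, lo, hi, n=15, iters=60, w=0.7, c=1.5, seed=3):
    """Plain global-best PSO with inertia weight w, minimising f over a box."""
    rng = random.Random(seed)
    dim = len(lo)
    X = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, Pf = [x[:] for x in X], [f(x) for x in X]
    g = min(range(n), key=Pf.__getitem__)
    G, Gf = P[g][:], Pf[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d] + c * rng.random() * (P[i][d] - X[i][d])
                           + c * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi[d], max(lo[d], X[i][d] + V[i][d]))
            fi = f(X[i])
            if fi < Pf[i]:
                P[i], Pf[i] = X[i][:], fi
                if fi < Gf:
                    G, Gf = X[i][:], fi
    return G, Gf

# usage: tune (kp, ki, kd) to minimise the step-response ISE
gains, cost = pso(step_ise, lo=[0.0, 0.0, 0.0], hi=[10.0, 10.0, 2.0])
```

A fractional-order controller would replace the integral and derivative terms with fractional operators, but the outer PSO loop is unchanged.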
Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali
2015-01-01
In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were used to evaluate the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
Gill, K; Aldoohan, S; Collier, J
2014-06-01
Purpose: To study image optimization and radiation dose reduction in a pediatric shunt CT scanning protocol through the use of different beam-hardening filters. Methods: A 64-slice CT scanner at OU Children's Hospital was used to evaluate CT image contrast-to-noise ratio (CNR) and measure effective doses based on the concept of the CT dose index (CTDIvol) using the pediatric head shunt scanning protocol. The routine axial pediatric head shunt scanning protocol, optimized for the intrinsic x-ray tube filter, was used to evaluate CNR by acquiring images with the ACR-approved CT phantom; the radiation dose CT phantom was used to measure CTDIvol. These results were set as reference points to evaluate the effects of adding different filtering materials (Tungsten, Tantalum, Titanium, Nickel and Copper filters) to the existing filter on image quality and radiation dose. To ensure optimal image quality, the scanner's routine air calibration was run for each added filter. The image CNR was evaluated for different kVps and a wide range of mAs values using the above-mentioned beam-hardening filters. These scanning protocols were run under both axial and helical techniques. The CTDIvol and the effective dose were measured and calculated for all scanning protocols and added filtration, including the intrinsic x-ray tube filter. Results: The added beam-hardening filtration shapes the energy spectrum, reducing the dose by 27%, with no noticeable change in image low-contrast detectability. Conclusion: The effective dose depends strongly on CTDIvol, which in turn depends strongly on beam-hardening filtration. A substantial reduction in effective dose is realized using beam-hardening filters as compared to the intrinsic filter. This phantom study showed that significant radiation dose reduction can be achieved in pediatric shunt CT scanning protocols without compromising the diagnostic value of image quality.
Gain, James
Particle Swarm Optimization with Spatially Meaningful Neighbours James Lane, Andries Engelbrecht and James Gain Abstract-- Neighbourhood topologies in particle swarm op- timization (PSO) are typically in multimodal problems, as demonstrated by the Fitness Euclidean Ratio (FER) PSO [5]. James Lane and James Gain
Stevens, G.A.; Moyer, E.S.
1989-05-01
The efficiency of filter media is dependent on the characteristics of the challenge aerosol and the filter's construction. Challenge aerosol parameters, such as particle size, density, shape, electrical charge, and flow rate, are influential in determining the filter's efficiency. In this regard, a so-called "worst case" set of conditions has been proposed for testing respirator filter efficiency in order to ensure wearer protection. Data collected on various types of filters (dust and mist; dust, fume, and mist; paint, lacquer, and enamel mist; and high efficiency) challenged with a worst case-type sodium chloride (NaCl) and dioctyl phthalate (DOP) aerosol are presented. The particle size of maximum penetration varies as a function of filter type and was less than 0.25-micron count mean diameter (CMD) in all cases. The count efficiency for high efficiency filters was greater than 99.97% at worst case testing conditions, but the worst case count efficiencies for dust and mist; dust, fume and mist; and paint, lacquer and enamel mist filters were not nearly as efficient as existing test methods indicate. Also, as the test flow rate is increased, the count efficiency decreases. Thus, respirator filters were found to conform to the prediction of single-fiber filtration theory. PMID:2729101
NASA Astrophysics Data System (ADS)
Comani, S.; Mantini, D.; Alleva, G.; Di Luzio, S.; Romani, G. L.
2005-12-01
The greatest impediment to extracting high-quality fetal signals from fetal magnetocardiography (fMCG) is environmental magnetic noise, which may have peak-to-peak intensity comparable to fetal QRS amplitude. Being an unstructured Gaussian signal with large disturbances at specific frequencies, ambient field noise can be reduced with hardware-based approaches and/or with software algorithms that digitally filter magnetocardiographic recordings. At present, no systematic evaluation of filters' performances on shielded and unshielded fMCG is available. We designed high-pass and low-pass Chebychev II-type filters with zero-phase and stable impulse response; the most commonly used band-pass filters were implemented combining high-pass and low-pass filters. The achieved ambient noise reduction in shielded and unshielded recordings was quantified, and the corresponding signal-to-noise ratio (SNR) and signal-to-distortion ratio (SDR) of the retrieved fetal signals was evaluated. The study regarded 66 fMCG datasets at different gestational ages (22-37 weeks). Since the spectral structures of shielded and unshielded magnetic noise were very similar, we concluded that the same filter setting might be applied to both conditions. Band-pass filters (1.0-100 Hz) and (2.0-100 Hz) provided the best combinations of fetal signal detection rates, SNR and SDR; however, the former should be preferred in the case of arrhythmic fetuses, which might present spectral components below 2 Hz.
NASA Astrophysics Data System (ADS)
Wallace, Lance A.; Emmerich, Steven J.; Howard-Reed, Cynthia
Airborne particles are implicated in morbidity and mortality of certain high-risk subpopulations. Exposure to particles occurs mostly indoors, where a main removal mechanism is deposition to surfaces. Deposition can be affected by the use of forced-air circulation through ducts or by air filters. In this study, we calculate the deposition rates of particles in an occupied house due to forced-air circulation and the use of in-duct filters such as electrostatic precipitators (ESP) and fibrous mechanical filters (MECH). Deposition rates are calculated for 128 size categories ranging from 0.01 to 2.5 μm. More than 110 separate "events" (mostly cooking, candle burning, and pouring kitty litter) were used to calculate deposition rates for four conditions: fan off, fan on, MECH installed, ESP installed. For all cases, deposition rates varied in a "U"-shaped distribution with the minimum occurring near 0.1 μm, as predicted by theory. The use of the central fan with no filter or with a standard furnace filter increased deposition rates by amounts on the order of 0.1-0.5 h -1. The MECH increased deposition rates by up to 2 h -1 for ultrafine and fine particles but was ineffective for particles in the 0.1-0.5 μm range. The ESP increased deposition rates by 2-3 h -1 and was effective for all sizes. However, the ESP lost efficiency after several weeks and needed regular cleaning to maintain its effectiveness. A reduction of particle levels by 50% or more could be achieved by use of the ESP when operating properly. Since the use of fans and filters reduces particle concentrations from both indoor and outdoor sources, it is more effective than the alternative approach of reducing ventilation by closing windows or insulating homes more tightly. For persons at risk, use of an air filter may be an effective method of reducing exposure to particles.
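The deposition-rate calculation rests on a standard log-linear decay fit: after an indoor source event ends, C(t) = C0 e^(-(a+k)t), so the slope of ln C against time gives the total loss rate, and subtracting the air-exchange rate a leaves the deposition (plus filtration) rate k. A sketch with synthetic numbers (the concentrations and rates below are illustrative, not the study's measurements):

```python
import math

def decay_rate(times, concs):
    """Least-squares slope of ln(C) versus t; for an indoor decay event
    ln C(t) = ln C0 - (a + k) t, where a is the air-exchange rate and k
    the size-resolved deposition (plus filtration) rate."""
    n = len(times)
    logs = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope  # total loss rate a + k, in 1/h

# synthetic event: air-exchange a = 0.5/h, deposition k = 1.2/h
a, k = 0.5, 1.2
times = [i * 0.25 for i in range(13)]              # 3 h of 15-min samples
concs = [5000.0 * math.exp(-(a + k) * t) for t in times]
total = decay_rate(times, concs)
deposition = total - a                             # recovers k = 1.2/h
```

In the study this fit is repeated per size channel and per operating condition (fan off/on, MECH, ESP), and the filter's contribution is the increase in the fitted rate over the fan-off baseline.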
A one-step screening process for optimal alignment of (soft) colloidal particles
NASA Astrophysics Data System (ADS)
Hiltl, Stephanie; Oltmanns, Jens; Böker, Alexander
2012-11-01
We developed nanostructured gradient wrinkle surfaces to establish a one-step screening process towards optimal assembly of soft and hard colloidal particles (microgel systems and silica particles). Thereby, we simplify studies on the influence of wrinkle dimensions (wavelength, amplitude) on particle properties and their alignment. In a combinatorial experiment, we optimize particle assembly regarding the ratio of particle diameter vs. wrinkle wavelength and packing density and point out differences between soft and hard particles. The preparation of wrinkle gradients in oxidized top layers on elastic poly(dimethylsiloxane) (PDMS) substrates is based on a controlled wrinkling approach. Partial shielding of the substrate during plasma oxidation is crucial to obtain two-dimensional gradients with amplitudes ranging from 7 to 230 nm and wavelengths between 250 and 900 nm. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr32710d
Optimization of HTR fuel design to reduce fuel particle failures
Boer, B.; Kloosterman, J. L.; Ougouag, A. M.
2006-07-01
In this paper, an attempt is made to formulate criteria that can be used in the redesign of HTR fuel. A simplified fuel performance model is set up to calculate the fuel particle failure probability as a function of the TRISO particle design and the particle packing fraction. These models require knowledge of the fast neutron dose, the fuel burnup level, and the fuel temperature. In this paper, neutronic, thermal-hydraulic and burnup calculations for the PBMR 400 MWth design are used to provide the fuel performance model with the required data. It was found that the failure impact increases considerably with increasing number of particles and reactor operating temperature, but decreases with a larger buffer layer. (authors)
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. Owing to the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
NASA Astrophysics Data System (ADS)
Siade, A. J.; Prommer, H.; Welter, D.
2014-12-01
Groundwater management and remediation require the implementation of numerical models in order to evaluate the potential anthropogenic impacts on aquifer systems. In many situations, the numerical model must simulate not only groundwater flow and transport but also geochemical and biological processes. Each process being simulated carries with it a set of parameters that must be identified, along with differing potential sources of model-structure error. Various data types are often collected in the field and then used to calibrate the numerical model; however, these data types can represent very different processes and can subsequently be sensitive to the model parameters in extremely complex ways. Therefore, developing an appropriate weighting strategy to address the contributions of each data type to the overall least-squares objective function is not straightforward. This is further compounded by the presence of potential sources of model-structure errors that manifest themselves differently for each observation data type. Finally, reactive transport models are highly nonlinear, which can lead to convergence failure for algorithms operating on the assumption of local linearity. In this study, we propose a variation of the popular particle swarm optimization algorithm to address trade-offs associated with the calibration of one data type over another. This method removes the need to specify weights between observation groups and instead produces a multi-dimensional Pareto front that illustrates the trade-offs between data types. We use the PEST++ run manager, along with the standard PEST input/output structure, to implement parallel programming across multiple desktop computers using TCP/IP communications. This allows for very large swarms of particles without the need of a supercomputing facility. The method was applied to a case study in which modeling was used to gain insight into the mobilization of arsenic at a deep-well injection site.
Multiple data types (e.g., hydrochemical, geophysical, tracer, temperature, etc.) were collected prior to, and during an injection trial. Visualizing the trade-off between the calibration of each data type has provided the means of identifying some model-structure deficiencies.
Pseudo-gradient based particle swarm optimization for nonconvex economic dispatch
NASA Astrophysics Data System (ADS)
Vo, Dieu N.; Schegner, Peter; Ongsakul, Weerakorn
2012-11-01
This paper proposes a pseudo-gradient based particle swarm optimization (PGPSO) method for solving nonconvex economic dispatch (ED) including valve point effects, multiple fuels, and prohibited operating zones. The proposed PGPSO is based on the self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC), with the positions of the particles guided by a pseudo-gradient. The pseudo-gradient here determines an appropriate direction for the particles during their movement so that they can quickly move to an optimal solution. The proposed method has been tested on several systems, and the obtained results are compared to those from many other methods available in the literature. Test results indicate that the proposed method can obtain lower total costs than many others, in less computing time, especially for large-scale problems. Therefore, the proposed PGPSO is favorable for online implementation in practical ED problems.
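As a sketch of the guiding idea (not the paper's exact HPSO-TVAC formulation), the pseudo-gradient can be taken as the per-coordinate sign of a particle's last move whenever that move improved the objective; when the move did not improve it, no direction is suggested:

```python
def sign(v):
    # returns -1, 0, or +1
    return (v > 0) - (v < 0)

def pseudo_gradient(x_prev, x_curr, f):
    """Sign-based pseudo-gradient (illustrative sketch): if the move from
    x_prev to x_curr improved the objective f, each coordinate's descent
    direction is the sign of that step; otherwise no direction is given."""
    if f(x_curr) < f(x_prev):
        return [sign(c - p) for p, c in zip(x_prev, x_curr)]
    return [0] * len(x_curr)
```

In the update rule, a particle with a nonzero pseudo-gradient would then be stepped along that direction with the magnitude of its PSO velocity, rather than along the raw velocity vector.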
Chen, Zaigao; Wang, Jianguo; Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 ; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui; Guo, Weijie
2013-11-15
An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is taken as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in a relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that the optimal parameters of the non-uniform slow wave structure in the RBWO can be obtained, and the output microwave power increases by 52.6% after the device is optimized.
Author's personal copy 3-D simulation of particle filtration in electrospun nanofibrous filters
Tafreshi, Hooman Vahedi
. The numerical simulations presented here are believed to be the most complete and realistic filter modeling a technique known as electrospinning (the term "nanofiber" is often used in practice for fibers with a fiber models, starting with the work of [2], are developed using oversimplified 2-D geometries, wherein fibers
A Particle Filtering Approach to FM-Band Passive Radar Tracking and Automatic Target Recognition
Lanterman, Aaron
recognition is made possible by the inclusion of radar cross section (RCS) in the measurement vector. The extended Kalman filter cannot take advantage of radar cross section measurements because there is no closed is needed is another type of measurement. In this paper, we propose the use of radar cross section (RCS
Introducing the Modeling of Uniform-Size Particle of Single Media Filter by Using Stella
NASA Astrophysics Data System (ADS)
Jusoh, Ahmad; Giap, Sunny Goh Eng; Ibrahim, Mohd Zamri; Jaafar, Izan
2008-05-01
Granular filters are widely used to remove suspended solids from water under treatment. Their reliability is well accepted, and they have been modified into different modes of operation, media combinations, etc., to accommodate different purposes. Where an adequate volume of potable water must be supplied at minimum expense, gravity-driven rapid filtration is normally used. Conventionally, engineers refer to tabulated filter design criteria before designing a filtration unit, and a slight departure from the conventional design parameters normally requires time for re-calculation. The current study developed a simulation model with a user interface, governed by well-accepted equations such as the Carman-Kozeny and Ives-Gregory models, to ease the work of engineers in making design decisions. The simulated results are the total amount of water produced, the time required to reach a particular head loss, the particulate mass accumulated within the filter, and the designated total head loss. Overall, the developed simulation model could be effectively implemented for filter design purposes, after verification through laboratory experimentation or compilation of past experiments.
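The clean-bed head loss at the heart of such a design tool can be sketched with the Carman-Kozeny relation; the parameter names and default values below (kinematic viscosity at 20°C, sphericity 1.0) are illustrative assumptions, not the paper's implementation:

```python
def carman_kozeny_headloss(v, L, d, eps, nu=1.004e-6, phi=1.0, g=9.81):
    """Clean-bed head loss (m) through a granular filter, Carman-Kozeny form.
    v: superficial filtration velocity (m/s), L: bed depth (m),
    d: grain diameter (m), eps: bed porosity, nu: kinematic viscosity (m^2/s),
    phi: grain sphericity. Sketch only; defaults are assumptions."""
    Re = phi * d * v / nu                     # grain Reynolds number
    f = 150.0 * (1.0 - eps) / Re + 1.75       # Ergun-type friction factor
    return (f / phi) * (L / d) * ((1.0 - eps) / eps ** 3) * v ** 2 / g
```

For a typical rapid sand filter (5 m/h through 0.7 m of 0.5 mm sand at porosity 0.4) this yields a clean-bed head loss on the order of a few tenths of a metre, consistent with tabulated design values.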
Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M
2011-10-13
The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. PMID:21903214
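A Winter-style residual analysis of this kind can be sketched in pure Python; the first-order zero-phase smoother below is a simple stand-in for the study's dual-pass 4th-order Butterworth filter, so cutoff handling is only approximate:

```python
import math

def lowpass(x, fc, fs):
    """Zero-phase low-pass: a first-order smoother run forward then
    backward. A crude stand-in for a dual-pass Butterworth filter."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    def one_pass(sig):
        out, s = [], sig[0]
        for v in sig:
            s = a * s + (1.0 - a) * v
            out.append(s)
        return out
    return one_pass(one_pass(x)[::-1])[::-1]

def residual_curve(x, fs, cutoffs):
    """RMS residual between raw and filtered signal at each candidate
    cutoff: the quantity plotted in a residual analysis. The optimal
    cutoff is read off where this curve meets the intercept of a line
    fitted to its noise-dominated high-frequency tail."""
    res = []
    for fc in cutoffs:
        y = lowpass(x, fc, fs)
        res.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)))
    return res
```

The residual falls monotonically as the cutoff rises: too low a cutoff discards signal (large residual), too high a cutoff passes noise (small residual), and the knee between the two regimes marks the system-specific optimum the study estimates.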
Approach to analytically minimize the LCD moiré by image-based particle swarm optimization.
Tsai, Yu-Lin; Tien, Chung-Hao
2015-10-01
In this paper, we propose a methodology to optimize the parametric window of a liquid crystal display (LCD) system whose visual performance is deteriorated by pixel moiré arising between multiple periodic structures. Conventional analysis and minimization of moiré patterns are limited to a few parameters. With the proposed image-based particle swarm optimization (PSO), we enable optimization over multiple variables at the same time. A series of experiments was conducted to validate the methodology. Due to its versatility, the proposed technique is expected to have a promising impact on fast optimization in LCD designs with more complex configurations. PMID:26479663
Explanation of Particle Swarm Optimization and the Application in Power Systems
NASA Astrophysics Data System (ADS)
Aoki, Hidenori; Mizutani, Yoshibumi
This paper presents an explanation of meta-heuristics and their application in power systems. Meta-heuristics are iterative solution-search algorithms that make use of simple rules or heuristics to obtain better solutions. In recent years, research on meta-heuristics for solving complicated optimization problems has been actively pursued. In this paper, recent trends in the application of the particle swarm optimization method are discussed; the method can be applied to a wide range of optimization problems with both continuous and discrete variables.
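The simple rules the survey refers to can be made concrete with a minimal global-best PSO loop; the coefficients below are common textbook values, not taken from this paper:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=300, lo=-5.0, hi=5.0, seed=7):
    """Minimal global-best PSO sketch. Each particle keeps a velocity,
    its personal best, and is attracted toward the swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49           # inertia, cognitive, social weights
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, X[i][:]
                if val < gval:
                    gval, gbest = val, X[i][:]
    return gbest, gval
```

Discrete variables are typically handled in the same loop by rounding or by a binary-PSO variant of the position update.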
Korovyanko, O. J.; Rey-de-Castro, R.; Elles, C. G.; Crowell, R. A.; Li, Y.
2006-01-01
The temporal output of a Ti:Sapphire laser system has been optimized using an acousto-optic programmable dispersive filter and a genetic algorithm. In-situ recording of the evolution of the spectral phase, amplitude, and temporal pulse profile for each iteration of the algorithm using SPIDER shows that we are able to lock the spectral phase of the laser pulse within a narrow margin. By using the second harmonic of the CPA laser as feedback for the genetic algorithm, it has been demonstrated that severe mismatch between the compressor and stretcher can be compensated for in a short period of time.
A Particle Swarm Optimization Variant with an Inner Variable Learning Strategy
Pedrycz, Witold; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscape. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping out strategy is adaptive in nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge. PMID:24587746
Optimized Non-Obstructive Particle Damping (NOPD) Treatment for Composite Honeycomb Structures
NASA Technical Reports Server (NTRS)
Panossian, H.
2008-01-01
Non-Obstructive Particle Damping (NOPD) technology is a passive vibration damping approach whereby metallic or non-metallic particles in spherical or irregular shapes, of heavy or light consistency, and even liquid particles are placed inside cavities or attached to structures by an appropriate means at strategic locations, to absorb vibration energy. The objective of the work described herein is the development of a design optimization procedure and discussion of test results for such a NOPD treatment on honeycomb (HC) composite structures, based on finite element modeling (FEM) analyses, optimization and tests. Modeling and predictions were performed and tests were carried out to correlate the test data with the FEM. The optimization procedure consisted of defining a global objective function, using finite difference methods, to determine the optimal values of the design variables through quadratic linear programming. The optimization process was carried out by targeting the highest dynamic displacements of several vibration modes of the structure and finding an optimal treatment configuration that will minimize them. An optimal design was thus derived and laboratory tests were conducted to evaluate its performance under different vibration environments. Three honeycomb composite beams, with Nomex core and aluminum face sheets, empty (untreated), uniformly treated with NOPD, and optimally treated with NOPD, according to the analytically predicted optimal design configuration, were tested in the laboratory. It is shown that the beam with optimal treatment has the lowest response amplitude. Described below are results of modal vibration tests and FEM analyses from predictions of the modal characteristics of honeycomb beams under zero, 50% uniform treatment and an optimal NOPD treatment design configuration and verification with test data.
NASA Astrophysics Data System (ADS)
Wang, Xinke; Bi, Chenyang; Xu, Ying
2015-09-01
Measurements of gas/particle partition coefficients for semivolatile organic compounds (SVOCs) using filter-sorbent samplers can be biased if a fraction of gas-phase mass is measured erroneously as particle-phase due to sorption of SVOC gases to the filter, or, if a fraction of particle-phase mass is measured erroneously as gas-phase due to penetration of particles into the sorbent. A fundamental mechanistic model to characterize the air sampling process with filter-sorbent samplers for SVOCs was developed and partially validated. The potential sampling artifacts associated with measurements of gas-particle partitioning were examined for 19 SVOCs. Positive sampling bias (i.e., overestimation of gas/particle partition coefficients) was observed for almost all the SVOCs. For certain compounds, the measured partition coefficient was several orders of magnitude greater than the presumed value. It was found that the sampling artifacts can be ignored when the value of log[Kf/(Kp·Cp,a)] is less than 7. By normalizing the model, general factors that influence the sampling artifacts were investigated. Correlations were obtained between the dimensionless time required for the gas-phase SVOCs within the filter to reach steady state (Ts,s*) and the chemical Vp values, which can be used to estimate appropriate sampling time. The potential errors between measured and actual gas/particle partition coefficients of SVOCs as a function of sampling velocity and time were calculated and plotted for a range of SVOCs (vapor pressures: 10^-8 to 10^-3 Pa). These plots were useful in identifying bias from the sampling in previously-completed field measurements. Penetration of particles into the sorbent may result in significant underestimation of the partition coefficient for particles in the size range between 10 nm and 2 μm. For most of the selected compounds, backup filters can be used to correct artifacts effectively.
However, for some compounds with very low vapor pressure, the artifacts remained or became even larger than they were without the backup filter. Thus, the option of backup filters must be considered carefully in field measurements of the gas/particle partitioning of SVOCs. The results of this work will allow researchers to predict potential artifacts associated with SVOC gas/particle partitioning as functions of compounds, the concentration of particles, the distribution of particle sizes, sampling velocity, and sampling time.
Catalysis of Reduction and Oxidation Reactions for Application in Gas Particle Filters
Udron, L.; Turek, T.
2002-09-19
The present study is a first part of an investigation addressing the simultaneous occurrence of oxidation and reduction reactions in catalytic filters. It has the objectives (a) to assess the state of knowledge regarding suitable (types of) catalysts for reduction and oxidation, (b) to collect and analyze published information about reaction rates of both NOx reduction and VOC oxidation, and (c) to adjust a lab-scale screening method to the requirements of an activity test with various oxidation/reduction catalysts.
Candiani, Gabriele; Carnevale, Claudio; Finzi, Giovanna; Pisoni, Enrico; Volta, Marialuisa
2013-08-01
To fulfill the requirements of the 2008/50 Directive, which allows member states and regional authorities to use a combination of measurement and modeling to monitor air pollution concentrations, a key approach to be properly developed and tested is data assimilation. In this paper, with a focus on regional domains, a comparison between optimal interpolation and the Ensemble Kalman Filter is shown, to stress the pros and drawbacks of the two techniques. These approaches can be used to implement more accurate monitoring of long-term pollution trends over a geographical domain, through an optimal combination of all the available sources of data. The two approaches are formalized and applied to a regional domain located in Northern Italy, where PM10 levels often higher than EU standard limits are measured. PMID:23639906
Bethel, E. Wes; Bethel, E. Wes
2012-01-06
This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
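The bilateral filter being benchmarked weights each neighbor by both spatial distance and intensity difference, which is what preserves edges. A 1-D pure-Python sketch of that kernel (parameter values are illustrative, not the report's):

```python
import math

def bilateral_1d(x, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: each sample becomes a weighted average
    of its neighbors, with weights that fall off both with spatial
    distance (sigma_s) and with intensity difference (sigma_r).
    A 1-D sketch of the 3-D convolution kernel the report benchmarks."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2.0 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

Across a sharp step, neighbors on the far side of the edge receive a near-zero range weight, so the step survives smoothing; a plain Gaussian blur would smear it.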
Nunez, L.; Kaminski, M.; Bradley, C.; Buchholz, B.A.; Aase, S.B.; Tuazon, H.E.; Vandegrift, G.F.; Landsberger, S.
1995-05-01
The Magnetically Assisted Chemical Separation (MACS) process combines the selectivity afforded by solvent extractants with magnetic separation by using specially coated magnetic particles to provide a more efficient chemical separation of transuranic (TRU) elements, other radionuclides, and heavy metals from waste streams. Development of the MACS process uses chemical and physical techniques to elucidate the properties of particle coatings and the extent of radiolytic and chemical damage to the particles, and to optimize the stages of loading, extraction, and particle regeneration. This report describes the development of a separation process for TRU elements from various high-level waste streams. Polymer-coated ferromagnetic particles with an adsorbed layer of octyl(phenyl)-N,N-diisobutylcarbamoylmethylphosphine oxide (CMPO) diluted with tributyl phosphate (TBP) were evaluated for use in the separation and recovery of americium and plutonium from nuclear waste solutions. Due to their chemical nature, these extractants selectively complex americium and plutonium contaminants onto the particles, which can then be recovered from the solution by using a magnet. The partition coefficients were larger than those expected based on liquid-liquid extractions, and the extraction proceeded with rapid kinetics. Extractants were stripped from the particles with alcohols, and 400-fold volume reductions were achieved. Particles were more sensitive to acid hydrolysis than to radiolysis. Overall, the optimization of a suitable MACS particle for TRU separation was achieved under simulant conditions, and a MACS unit is currently being designed for an in-lab demonstration.
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition
NASA Technical Reports Server (NTRS)
Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.
2011-01-01
Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments, where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose the MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads.
Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first input of the convolution becomes available. Thus, the new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem to statically schedule the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
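The equivalence between the naive decimator and its polyphase form described above can be checked directly; this is an illustrative sketch of the standard decomposition, not the TD-MRFIR scheduling itself:

```python
def fir(x, h):
    # direct-form convolution with zero-padded history
    return [sum(h[k] * (x[n - k] if n - k >= 0 else 0.0)
                for k in range(len(h)))
            for n in range(len(x))]

def decimate_naive(x, h, M):
    # full-rate FIR, then keep every M-th output; wasteful because
    # M-1 of every M convolutions are computed and then discarded
    return fir(x, h)[::M]

def decimate_polyphase(x, h, M):
    # split h into M sub-filters h_m[q] = h[qM + m]; each sub-filter
    # runs on an interleaved input stream, so all arithmetic is at
    # the low output rate: y[n] = sum_m sum_q h_m[q] * x[(n-q)M - m]
    y_len = (len(x) + M - 1) // M
    y = [0.0] * y_len
    for m in range(M):
        hm = h[m::M]
        for n in range(y_len):
            acc = 0.0
            for q, c in enumerate(hm):
                idx = (n - q) * M - m
                if 0 <= idx < len(x):
                    acc += c * x[idx]
            y[n] += acc
    return y
```

Both routines produce identical outputs; the polyphase version simply never computes the samples the naive version throws away, which is the inefficiency the thread-decomposition view also targets.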
Alleman, T. L.; Eudy, L.; Miyasato, M.; Oshinuga, A.; Allison, S.; Corcoran, T.; Chatterjee, S.; Jacobs, T.; Cherrillo, R. A.; Clark, R.; Virrels, I.; Nine, R.; Wayne, S.; Lansing, R.
2005-11-01
A fleet of six 2001 International Class 6 trucks operating in southern California was selected for an operability and emissions study using gas-to-liquid (GTL) fuel and catalyzed diesel particle filters (CDPF). Three vehicles were fueled with CARB specification diesel fuel and no emission control devices (current technology), and three vehicles were fueled with GTL fuel and retrofit with Johnson Matthey's CCRT diesel particulate filter. No engine modifications were made.
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
Optimal optical filters of fluorescence excitation and emission for poultry fecal detection
Technology Transfer Automated Retrieval System (TEKTRAN)
Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...
Solution to Electric Power Dispatch Problem Using Fuzzy Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
Chaturvedi, D. K.; Kumar, S.
2015-03-01
This paper presents the application of fuzzy particle swarm optimization to the constrained economic load dispatch (ELD) problem of thermal units. Several factors, such as quadratic cost functions with valve point loading, ramp rate limits, and prohibited operating zones, are considered in the computation models. Fuzzy particle swarm optimization (FPSO) provides a new mechanism to avoid the premature convergence problem. The performance of the proposed algorithm is evaluated on four test systems. Results obtained by the proposed method have been compared with those obtained by the PSO method and literature results. The experimental results show that the proposed FPSO method is capable of obtaining minimum fuel costs in fewer iterations.
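The valve-point loading mentioned above adds a rectified-sine ripple to each unit's quadratic fuel cost, making the dispatch objective nonsmooth and nonconvex. A sketch of that objective (coefficient values below are illustrative, not from the paper's test systems):

```python
import math

def unit_cost(P, a, b, c, e, f, Pmin):
    """Fuel cost of one thermal unit with valve-point loading:
    a*P^2 + b*P + c + |e*sin(f*(Pmin - P))|. The rectified-sine
    term creates many local minima."""
    return a * P * P + b * P + c + abs(e * math.sin(f * (Pmin - P)))

def dispatch_cost(Ps, units, demand, penalty=1e6):
    """Total ELD objective: summed unit costs plus a quadratic penalty
    on the power-balance violation (a common constraint-handling
    choice; the paper's exact handling may differ)."""
    total = sum(unit_cost(P, *u) for P, u in zip(Ps, units))
    return total + penalty * (sum(Ps) - demand) ** 2
```

A swarm-based method such as FPSO then minimizes `dispatch_cost` over the vector of unit outputs, subject to each unit's limits.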
NASA Astrophysics Data System (ADS)
Arya, Rajesh; Purey, Pradeep
2015-06-01
MW-generation rescheduling is considered for voltage stability improvement under stressed operating conditions; at times it can avoid voltage collapse. This paper describes an algorithm for determining the optimum MW-generation participation pattern for static voltage stability margin enhancement. The optimum search direction has been obtained by employing a modified bare-bones particle swarm optimization technique and is based on maximization of the distance to the point of collapse in generation space. The developed algorithm has been implemented on a standard 25-bus test system, and the results obtained have been compared with those obtained using standard particle swarm optimization.
Neklyudov, I M; Fedorova, L I; Poltinin, P Ya
2013-01-01
The spatial distributions of the small dispersive coal dust particles with the nano and micro sizes in the granular filtering medium with the cylindrical coal granules in the absorber in the horizontal iodine air filter during its long term operation at the nuclear power plant are researched. It is shown that the concentration density maxima of the small dispersive coal dust particles appear in the granular filtering medium with the cylindrical coal absorbent granules in the horizontal iodine air filter at an action by the air dust aerosol blow. The comparison of the measured aerodynamic resistances of the horizontal and vertical iodine air filters is conducted. The main conclusion is that the magnitude of the aerodynamic resistance of the horizontal iodine air filters is much smaller in comparison with the magnitude of the aerodynamic resistance of the vertical iodine air filters at the same loads of the air dust aerosol volumes. It is explained that the direction of the air dust aerosol blow and the directi...
Apparatus and method for concentrating and filtering particles suspended in a fluid
Fiechtner, Gregory J. (Bethesda, MD); Cummings, Eric B. (Livermore, CA); Singh, Anup K. (Danville, CA)
2009-05-19
Disclosed is a device for separating and concentrating particles suspended in a fluid stream by using dielectrophoresis (DEP) to trap and/or deflect those particles as they migrate through a fluid channel. The method uses fluid channels designed to constrain a liquid flowing through them to uniform electrokinetic flow velocities. This behavior is achieved by connecting deep and shallow sections of channels, with the channel depth varying abruptly along an interface. By careful design of abrupt changes in specific permeability at the interface, an abrupt and spatially uniform change in electrokinetic force can be selected. Because these abrupt interfaces also cause a sharp gradient in applied electric fields, a DEP force also can be established along the interface. Depending on the complex conductivity of the suspended particles and the immersion liquid, the DEP force can controllably complement or oppose the local electrokinetic force transporting the fluid through the channel, allowing for manipulation of particles suspended in the transporting liquid.
NASA Astrophysics Data System (ADS)
Cross, E. S.; Sappok, A.; Carrasquillo, A. J.; Onasch, T. B.; Fortner, E.; Jayne, J.; Wong, V.; Worsnop, D. R.; Kroll, J. H.
2010-12-01
Diesel engine emissions constitute an important source of particulate black carbon (BC) and gas phase organics in the atmosphere. Particles composed of black carbon absorb incoming solar radiation having a net positive radiative forcing effect on the climate. Black carbon also has major air quality implications as BC particles from combustion sources are often coated with poly-aromatic hydrocarbons (PAHs), and are generally emitted in higher concentrations close to population centers. Regulations of diesel emissions target the mass of particulate matter (PM) and concentration of volatile gas phase organic compounds (VOC) produced. A third, potentially important component of diesel exhaust, is low volatility organic compounds (LVOC). Both the VOCs and LVOCs can lead to the formation of ultrafine particles (via homogeneous nucleation) and secondary organic aerosols (via oxidation). Recent development of mass spectrometric techniques to measure particulate black carbon and gas phase organics provide the opportunity to quantify and chemically characterize diesel emissions in real-time. Measurements of both the particulate and gas phase emissions from a medium-duty diesel engine will be presented. The experimental apparatus includes a diesel particulate filter (DPF) integrated in the exhaust line, which is a requirement for all 2007 and newer on-road diesel engines in the U.S. Measurements taken over the regeneration cycle of the DPF provide insight into how this after-treatment technology influences the gas phase and particle phase composition of the emissions. Gas phase measurements were made with a newly developed Total Gas-Phase Organic (TGO) instrument. Particulate species were characterized with a Soot Particle Aerosol Mass Spectrometer (SP-AMS). The combined utility of the TGO and SP-AMS instruments for emissions characterization studies will be demonstrated.
NASA Astrophysics Data System (ADS)
Giffin, Paxton K.; Parsons, Michael S.; Unz, Ronald J.; Waggoner, Charles A.
2012-05-01
The Institute for Clean Energy Technology (ICET) at Mississippi State University has developed a test stand capable of lifecycle testing of high-efficiency particulate air filters and other filters specified in the American Society of Mechanical Engineers Code on Nuclear Air and Gas Treatment (AG-1). The test stand is currently equipped to test AG-1 Section FK radial flow filters, and expansion is underway to increase testing capabilities for other types of AG-1 filters. The test stand is capable of producing differential pressures of 12.45 kPa (50 in. w.c.) at volumetric air flow rates up to 113.3 m3/min (4000 CFM). Testing is performed at elevated and ambient conditions for temperature and relative humidity. Current testing utilizes three challenge aerosols: carbon black, alumina, and Arizona road dust (A1-Ultrafine). Each aerosol has a different mass median diameter to test loading over a wide range of particle sizes. The test stand is designed to monitor and maintain relative humidity and temperature to required specifications. Instrumentation is implemented on the upstream and downstream sections of the test stand as well as on the filter housing itself. Representative data are presented herein illustrating the test stand's capabilities. Digital images of the filter pack collected during and after testing are displayed after the representative data are discussed. In conclusion, the ICET test stand with AG-1 filter testing capabilities has been developed, and hurdles such as test parameter stability and design flexibility have been overcome.
Rod-filter-field optimization of the J-PARC RF-driven H⁻ ion source
Ueno, A.; Ohkoshi, K.; Ikegami, K.; Takagi, A.; Yamazaki, S.; Oguri, H.
2015-04-08
In order to satisfy the Japan Proton Accelerator Research Complex (J-PARC) second-stage requirements of an H⁻ ion beam of 60 mA within normalized emittances of 1.5 π mm·mrad both horizontally and vertically, a flat-top beam duty factor of 1.25% (500 μs × 25 Hz) and a lifetime of longer than 1 month, the J-PARC cesiated RF-driven H⁻ ion source was developed by using an internal antenna developed at the Spallation Neutron Source (SNS). Although the rod-filter field (RFF) is indispensable and one of the parameters that most strongly governs beam performance for the RF-driven H⁻ ion source with an internal antenna, the procedure to optimize it had not been established. In order to optimize the RFF and establish the procedure, the beam performances of the J-PARC source with various types of rod-filter magnets (RFMs) were measured. By changing the RFM's gap length and gap number inside the region projecting the antenna inner diameter along the beam axis, the dependence of the H⁻ ion beam intensity on the net 2 MHz RF power was optimized. Furthermore, fine-tuning of the RFM's cross-section (magnetomotive force) was indispensable for easy operation with the temperature (T_PE) of the plasma electrode (PE) lower than 70°C, which minimizes the transverse emittances. The 5% reduction of the RFM's cross-section decreased the time constant to recover the cesium effects after a slightly excessive cesiation on the PE from several tens of minutes to several minutes for T_PE around 60°C.
NASA Technical Reports Server (NTRS)
Houts, R. C.; Vaughn, G. L.
1974-01-01
Three algorithms are developed for designing finite impulse response digital filters to be used for pulse shaping and channel equalization. The first is the Minimax algorithm which uses linear programming to design a frequency-sampling filter with a pulse shape that approximates the specification in a minimax sense. Design examples are included which accurately approximate a specified impulse response with a maximum error of 0.03 using only six resonators. The second algorithm is an extension of the Minimax algorithm to design preset equalizers for channels with known impulse responses. Both transversal and frequency-sampling equalizer structures are designed to produce a minimax approximation of a specified channel output waveform. Examples of these designs are compared as to the accuracy of the approximation, the resultant intersymbol interference (ISI), and the required transmitted energy. While the transversal designs are slightly more accurate, the frequency-sampling designs using six resonators have smaller ISI and energy values.
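The frequency-sampling structure underlying all three algorithms can be illustrated with a short sketch. This is a generic frequency-sampling FIR design in Python/NumPy, not the Minimax linear-programming algorithm of the abstract; the function name and the lowpass example are illustrative assumptions.

```python
import numpy as np

def freq_sampling_fir(amplitude, n_taps):
    """Frequency-sampling design of a linear-phase FIR filter.

    amplitude: desired (real, symmetric) magnitude response sampled at
    n_taps uniformly spaced frequencies on [0, 2*pi).
    """
    assert n_taps % 2 == 1, "odd length keeps the group delay an integer"
    M = (n_taps - 1) // 2                                  # group delay in samples
    k = np.arange(n_taps)
    H = amplitude * np.exp(-2j * np.pi * k * M / n_taps)   # impose linear phase
    return np.fft.ifft(H).real                             # imaginary part is round-off

# Ideal lowpass specification: pass DC plus two positive-frequency samples
# (and their mirrored negative-frequency counterparts).
n = 15
amp = np.zeros(n)
amp[:3] = 1.0
amp[-2:] = 1.0
h = freq_sampling_fir(amp, n)
# The realized response interpolates the specification exactly at the
# n design frequencies, and h is symmetric (linear phase).
```

Between the design frequencies the response is free to ripple, which is exactly the error that a minimax criterion such as the one in the abstract would then control.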
Han, Shuxin; Yue, Qinyan; Yue, Min; Gao, Baoyu; Zhao, Yaqin; Cheng, Wenjing
2009-02-01
Novel media, sludge-fly ash ceramic particles (SFCP), employed in an upflow lab-scale A/O BAF were investigated for synthetic wastewater treatment. The influences of hydraulic retention time (HRT), air-liquid ratio (A/L) and recirculation on the removals of chemical oxygen demand (CODcr), ammonia (NH(4)(+)-N) and total nitrogen (TN) were discussed. The optimum operation conditions were obtained as HRT of 2.0 h, A/L of 15:1 and 200% recirculation. Under the optimal conditions, 90% CODcr, more than 98% NH(4)(+)-N and approximately 70% TN were removed. The average consumed volumetric loading rates for CODcr, NH(4)(+)-N and TN with 200% recirculation were 4.06, 0.36 and 0.29 kg (m3·d)(-1), respectively. The CODcr and TN removal mainly occurred in the anoxic zone, while nitrification was completed at a height of 70 cm from the inlet at the bottom due to a suitable column layout of the biological aerated filter (BAF). The characteristics of the wastewater and of backwashing affected TN removal to a large degree. In addition, the features of the media (SFCP) and synthetic wastewater contributed to a strong buffer capacity in the BAF system, so that the effluent pH at different media heights fluctuated only slightly and was insensitive to recirculation. PMID:18828988
NASA Astrophysics Data System (ADS)
Moradkhani, Hamid; Dechant, Caleb M.; Sorooshian, Soroosh
2012-12-01
Particle filters (PFs) have become popular for assimilation of a wide range of hydrologic variables in recent years. With this increased use, it has become necessary to increase the applicability of this technique for use in complex hydrologic/land surface models and to make these methods more viable for operational probabilistic prediction. To make the PF a more suitable option in these scenarios, it is necessary to improve the reliability of these techniques. Improved reliability in the PF is achieved in this work through an improved parameter search, with the use of variable variance multipliers and Markov Chain Monte Carlo methods. Application of these methods to the PF allows for greater search of the posterior distribution, leading to more complete characterization of the posterior distribution and reducing risk of sample impoverishment. This leads to a PF that is more efficient and provides more reliable predictions. This study introduces the theory behind the proposed algorithm, with application on a hydrologic model. Results from both real and synthetic studies suggest that the proposed filter significantly increases the effectiveness of the PF, with marginal increase in the computational demand for hydrologic prediction.
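The resampling machinery that the improved parameter search builds on can be sketched with a plain bootstrap (SIR) particle filter. This is a generic textbook filter in Python/NumPy, not the authors' variable-variance/MCMC variant; the scalar random-walk model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500, q=0.5, r=1.0):
    """Bootstrap (SIR) particle filter for a scalar random walk:
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate each particle through the state model.
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * (y - particles) ** 2 / r) + 1e-300
        w /= w.sum()
        # Resample to fight weight degeneracy (the step that, without
        # extra care, leads to the sample impoverishment noted above).
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    return np.array(estimates)

true_x = np.cumsum(rng.normal(0.0, 0.5, 50))   # synthetic drifting state
obs = true_x + rng.normal(0.0, 1.0, 50)        # noisy observations
est = bootstrap_pf(obs)                        # posterior-mean track
```

The MCMC move step proposed in the abstract would be inserted after the resampling line to re-diversify the particle set.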
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
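The Kalman filter machinery being tuned can be illustrated in its simplest scalar form. This is a generic one-dimensional predict/update loop, not the paper's tuner-selection method; the random-walk model and noise values are illustrative assumptions.

```python
import numpy as np

def kalman_1d(ys, q=1e-4, r=1.0, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x_t = x_{t-1} + N(0, q),
    y_t = x_t + N(0, r); returns the state estimate after each update."""
    x, p = x0, p0
    out = []
    for y in ys:
        p = p + q              # predict: covariance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (y - x)    # measurement update
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
ys = 5.0 + rng.normal(0.0, 1.0, 200)   # noisy observations of a constant
xs = kalman_1d(ys)                     # estimate converges toward 5.0
```

In the underdetermined setting of the abstract, the art lies in choosing which tuning parameters (here collapsed into the scalar q) enter the state vector at all.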
Development of a Design Tool for Flow Rate Optimization in the Tata Swach Water Filter
Ricks, Sean T.
When developing a first-generation product, an iterative approach often yields the shortest time-to-market. In order to optimize its performance, however, a fundamental understanding of the theory governing its operation ...
Optimizing flow rate and bacterial removal performance of ceramic pot filters in Tamale, Ghana
Zhang, Yiyue, S.M. Massachusetts Institute of Technology
2015-01-01
Pure Home Water (PHW) is an organization that seeks to improve the drinking water quality for those who do not have access to clean water in Northern Ghana. This study focuses on the further optimization of ceramic pot ...
Key Node Selection for Containing Infectious Disease Spread Using Particle Swarm Optimization
Wong, Limsoon
Key Node Selection for Containing Infectious Disease Spread Using Particle Swarm Optimization. Infectious diseases have grown into global health threats due to high human mobility. It is important to have intervention plans for containing the spread of such infectious diseases. Among various intervention strategies
Smooth Path Planning of a Mobile Robot Using Stochastic Particle Swarm Optimization
Li, Yangmin
Smooth Path Planning of a Mobile Robot Using Stochastic Particle Swarm Optimization Xin Chen exploration ability is developed, so that a swarm with small size can accomplish path planning. Simulation results validate the proposed algorithm in a mobile robot path planning. I. INTRODUCTION Smooth navigation
Optimization of cement and fly ash particle sizes to produce sustainable concretes Dale P. Bentz a,
Bentz, Dale P.
Optimization of cement and fly ash particle sizes to produce sustainable concretes Dale P. Bentz a and Technology, 100 Bureau Drive, Stop 7313, Gaithersburg, MD 20899-7313, USA b Roman Cement LLC, Salt Lake City form 29 April 2011 Accepted 30 April 2011 Available online 7 May 2011 Keywords: Blended cement Design
PATH SAMPLING FOR PARTICLE FILTERS WITH APPLICATION TO MULTI-TARGET TRACKING
…scientific and engineering applications including radar and signal processing, air traffic control, GPS, and multi-target tracking. The suggested approach was based on Girsanov's change of measure theorem for stochastic differential equations. The numerical results show that the suggested approach can significantly improve the performance of a particle filter.
NASA Astrophysics Data System (ADS)
Suman, A.; Mukerji, T.; Fernandez Martinez, J.
2010-12-01
Time-lapse seismic data has begun to play an important role in reservoir characterization, management and monitoring. It can provide information on the dynamics of fluids in the reservoir based on the relation between variations of seismic signals and movement of hydrocarbons and changes in formation pressure. Reservoir monitoring by repeated seismic or time-lapse surveys can help in reducing the uncertainties attached to reservoir models. In combination with geological and flow modeling as part of the history matching process, it can provide a better description of the reservoir and thus better reservoir forecasting. However, joint inversion of seismic and flow data for reservoir parameters is highly non-linear and complex. Stochastic optimization-based inversion has shown very good results in the integration of time-lapse seismic and production data in reservoir history matching. In this paper we have used a family of particle swarm optimizers for inversion of a semi-synthetic Norne field data set. We analyze the performance of the different PSO optimizers, both in terms of exploration and convergence rate. Finally, we also show some promising and preliminary results of the application of differential evolution. All of the versions of PSO provide an acceptable match with the original synthetic model. The advantage of using a global optimization method is that uncertainty can be assessed near the optimum point. To assess uncertainty near the optimum point we keep track of all particles over all iterations that have an objective function value below a selected cutoff. With these particles we plot the best, E-type and IQR (interquartile range) of porosity and permeability for each version of PSO. To compute uncertainty measures using a stochastic optimizer algorithm, care has to be taken not to oversample the optimal point. We keep track of the evolution of the median distance between the global best in each of the iterations and the particles of the swarm.
When this distance is smaller than a certain percentage of the initial value, it means that the swarm has collapsed towards the global best particle. In the posterior uncertainty, all the particles in this collapsed swarm have to be counted as one to prevent oversampling of the optimal point. Finally, based on the selected samples it is possible to produce averages (E-types) over the samples and interquartile range maps that help us to establish facies probabilities. Our results indicate that the PSO family has strong potential as an optimizer in joint inversion of time-lapse seismic data and production data. These algorithms will be further applied to the Norne field E-segment data.
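The swarm-collapse diagnostic described in this abstract, tracking the median particle-to-global-best distance across iterations, can be sketched alongside a basic PSO loop. This is a generic inertia-weight PSO on a toy sphere function, not the seismic inversion itself; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Inertia-weight PSO; also records the median particle-to-global-best
    distance per iteration as a swarm-collapse diagnostic."""
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    median_dist = []
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
        # Once this falls well below its initial value, the swarm has
        # collapsed onto the global best; later samples oversample it.
        median_dist.append(np.median(np.linalg.norm(x - g, axis=1)))
    return g, np.array(median_dist)

g, d = pso(lambda p: float(np.sum(p ** 2)))   # toy sphere objective
```

Comparing d[-1] against a fraction of d[0] is one simple way to implement the collapse threshold the abstract describes.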
Georgy, Jacques; Noureldin, Aboelmagd
2011-01-01
Satellite navigation systems such as the global positioning system (GPS) are currently the most common technique used for land vehicle positioning. However, in GPS-denied environments, there is an interruption in the positioning information. Low-cost micro-electro mechanical system (MEMS)-based inertial sensors can be integrated with GPS and enhance the performance in denied GPS environments. The traditional technique for this integration problem is Kalman filtering (KF). Due to the inherent errors of low-cost MEMS inertial sensors and their large stochastic drifts, KF, with its linearized models, has limited capabilities in providing accurate positioning. Particle filtering (PF) was recently suggested as a nonlinear filtering technique to accommodate for arbitrary inertial sensor characteristics, motion dynamics and noise distributions. An enhanced version of PF called the Mixture PF is utilized in this study to perform tightly coupled integration of a three dimensional (3D) reduced inertial sensors system (RISS) with GPS. In this work, the RISS consists of one single-axis gyroscope and a two-axis accelerometer used together with the vehicle’s odometer to obtain 3D navigation states. These sensors are then integrated with GPS in a tightly coupled scheme. In loosely-coupled integration, at least four satellites are needed to provide acceptable GPS position and velocity updates for the integration filter. The advantage of the tightly-coupled integration is that it can provide GPS measurement update(s) even when the number of visible satellites is three or lower, thereby improving the operation of the navigation system in environments with partial blockages by providing continuous aiding to the inertial sensors even during limited GPS satellite availability. To effectively exploit the capabilities of PF, advanced modeling for the stochastic drift of the vertically aligned gyroscope is used. 
In order to benefit from measurement updates for such drift, which are loosely-coupled updates, a hybrid loosely/tightly coupled solution is proposed. This solution is suitable for downtown environments because of the long natural outages or degradation of GPS. The performance of the proposed 3D Navigation solution using Mixture PF for 3D RISS/GPS integration is examined by road test trajectories in a land vehicle and compared to the KF counterpart. PMID:22163846
Wang Yan; Mohanty, Soumya D.
2010-03-15
The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
Optimal-Flow Minimum-Cost Correspondence Assignment in Particle Flow Tracking
Matov, Alexandre; Edvall, Marcus M.; Yang, Ge; Danuser, Gaudenz
2011-01-01
A diversity of tracking problems exists in which cohorts of densely packed particles move in an organized fashion; however, the stability of individual particles within the cohort is low. Moreover, the flows of cohorts can regionally overlap. Together, these conditions yield a complex tracking scenario that cannot be addressed by optical flow techniques that assume piecewise coherent flows, or by multiparticle tracking techniques that suffer from the local ambiguity in particle assignment. Here, we propose a graph-based assignment of particles in three consecutive frames to recover from image sequences the instantaneous organized motion of groups of particles, i.e. flows. The algorithm makes no a priori assumptions on the fraction of particles participating in organized movement, as this number continuously alters with the evolution of the flow fields in time. Graph-based assignment methods generally maximize the number of acceptable particle assignments between consecutive frames and only then minimize the association cost. In dense and unstable particle flow fields this approach produces many false positives. The approach proposed here avoids this by solving a multi-objective optimization problem in which the number of assignments is maximized while their total association cost is simultaneously minimized. The method is validated on standard benchmark data for particle tracking. In addition, we demonstrate its application to live cell microscopy where several large molecular populations with different behaviors are tracked. PMID:21720496
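The bi-objective assignment idea, maximizing the number of acceptable links first and then minimizing the total association cost, can be illustrated with a brute-force sketch for tiny frames. This is an illustrative toy, not the authors' graph algorithm; the gating threshold and cost matrix are made up, and brute force is only feasible for a handful of particles.

```python
from itertools import permutations

def best_assignment(cost, max_cost=5.0):
    """Brute-force bi-objective particle assignment between two frames.

    Among all one-to-one pairings, keep only links whose individual cost
    passes the gate max_cost; prefer pairings with MORE links first,
    and among those, the LOWEST total cost."""
    n = len(cost)
    best_links, best_key = [], None
    for perm in permutations(range(n)):
        links = [(i, j) for i, j in enumerate(perm) if cost[i][j] <= max_cost]
        # Lexicographic key: more links beats any cost saving.
        key = (-len(links), sum(cost[i][j] for i, j in links))
        if best_key is None or key < best_key:
            best_key, best_links = key, links
    return best_links

# Toy cost matrix: cost[i][j] = cost of linking particle i in frame t
# to particle j in frame t+1 (large entries fail the gate).
cost = [[0.2, 9.0, 9.0],
        [9.0, 0.5, 9.0],
        [9.0, 9.0, 0.3]]
links = best_assignment(cost)   # the three cheap diagonal links survive
```

Real implementations replace the factorial enumeration with a min-cost network-flow or Hungarian-style solver over the candidate-link graph.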
Steiner, Sandro; Czerwinski, Jan; Comte, Pierre; Heeb, Norbert V; Mayer, Andreas; Petri-Fink, Alke; Rothen-Rutishauser, Barbara
2015-08-01
Metal-containing fuel additives catalyzing soot combustion in diesel particle filters are used in a widespread manner, and with the growing popularity of diesel vehicles, their application is expected to increase in the near future. Detailed investigation into how such additives affect exhaust toxicity is therefore necessary and has to be performed before epidemiological evidence points towards adverse effects of their application. The present study investigates how the addition of an iron-based fuel additive (Satacen®3, 40 ppm Fe) to low-sulfur diesel affects the in vitro cytotoxic, oxidative, (pro-)inflammatory, and mutagenic activity of the exhaust of a passenger car operated under constant, low-load conditions by exposing a three-dimensional model of the human airway epithelium to complete exhaust at the air-liquid interface. We could show that the use of the iron catalyst without and with filter technology has positive as well as negative effects on exhaust toxicity compared to exhaust with no additives: it decreases the oxidative and, compared to a non-catalyzed diesel particle filter, the mutagenic potential of diesel exhaust, but increases (pro-)inflammatory effects. The presence of a diesel particle filter also influences the impact of Satacen®3 on exhaust toxicity, and the proper choice of the filter type to be used is of importance with regards to exhaust toxicity. PMID:24880869
Biochemical systems identification by a random drift particle swarm optimization approach
2014-01-01
Background Finding an efficient method to solve the parameter estimation problem (inverse problem) for nonlinear biochemical dynamical systems could help promote the functional understanding at the system level for signalling pathways. The problem is stated as a data-driven nonlinear regression problem, which is converted into a nonlinear programming problem with many nonlinear differential and algebraic constraints. Due to the typical ill conditioning and multimodality nature of the problem, it is in general difficult for gradient-based local optimization methods to obtain satisfactory solutions. To surmount this limitation, many stochastic optimization methods have been employed to find the global solution of the problem. Results This paper presents an effective search strategy for a particle swarm optimization (PSO) algorithm that enhances the ability of the algorithm for estimating the parameters of complex dynamic biochemical pathways. The proposed algorithm is a new variant of random drift particle swarm optimization (RDPSO), which is used to solve the above mentioned inverse problem and compared with other well known stochastic optimization methods. Two case studies on estimating the parameters of two nonlinear biochemical dynamic models have been taken as benchmarks, under both the noise-free and noisy simulation data scenarios. Conclusions The experimental results show that the novel variant of RDPSO algorithm is able to successfully solve the problem and obtain solutions of better quality than other global optimization methods used for finding the solution to the inverse problems in this study. PMID:25078435
Grinshpun, Sergey A; Haruta, Hiroki; Eninger, Robert M; Reponen, Tiina; McKay, Roy T; Lee, Shu-An
2009-10-01
The protection level offered by filtering facepiece particulate respirators and face masks is defined by the percentage of ambient particles penetrating inside the protection device. There are two penetration pathways: (1) through the faceseal leakage, and (2) through the filter medium. This study aimed at differentiating the contributions of these two pathways for particles in the size range of 0.03-1 microm under actual breathing conditions. One N95 filtering facepiece respirator and one surgical mask commonly used in health care environments were tested on 25 subjects (matching the latest National Institute for Occupational Safety and Health fit testing panel) as the subjects performed conventional fit test exercises. The respirator and the mask were also tested with breathing manikins that precisely mimicked the prerecorded breathing patterns of the tested subjects. The penetration data obtained in the human subject- and manikin-based tests were compared for different particle sizes and breathing patterns. Overall, 5250 particle size- and exercise-specific penetration values were determined. For each value, the faceseal leakage-to-filter ratio was calculated to quantify the relative contributions of the two penetration pathways. The number of particles penetrating through the faceseal leakage of the tested respirator/mask far exceeded the number of those penetrating through the filter medium. For the N95 respirator, the excess was (on average) by an order of magnitude and significantly increased with an increase in particle size (p < 0.001): approximately 7-fold greater for 0.04 microm, approximately 10-fold for 0.1 microm, and approximately 20-fold for 1 microm. For the surgical mask, the faceseal leakage-to-filter ratio ranged from 4.8 to 5.8 and was not significantly affected by the particle size for the tested submicrometer fraction. Facial/body movement had a pronounced effect on the relative contribution of the two penetration pathways.
Breathing intensity and facial dimensions showed some (although limited) influence. Because most of the penetrated particles entered through the faceseal, the priority in respirator/mask development should be shifted from improving the efficiency of the filter medium to establishing a better fit that would eliminate or minimize faceseal leakage. PMID:19598054
NASA Technical Reports Server (NTRS)
Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III
1991-01-01
Two matched-filter-theory-based schemes are described and illustrated for obtaining maximized and time-correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.
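The core matched-filter operation both schemes build on, correlating a measured signal against a known waveform and locating the correlation peak, can be sketched as follows. This is a generic NumPy illustration, not the paper's gust-load search; the Hanning-window "gust" template and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def matched_filter_peak(signal, template):
    """Correlate the signal with a zero-mean copy of the known template
    (the matched filter for white noise) and return the peak location."""
    t = template - template.mean()
    out = np.correlate(signal, t, mode="valid")
    return int(np.argmax(out)), out

template = np.hanning(32)              # assumed "gust" waveform shape
signal = rng.normal(0.0, 0.1, 256)     # white measurement noise
signal[100:132] += template            # embed the waveform at sample 100
loc, out = matched_filter_peak(signal, template)   # loc lands near 100
```

The schemes in the abstract go further by searching over excitation waveforms so that the filtered load response, rather than a fixed template correlation, is maximized.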
Bauri, Ranjit; Yadav, Devinder; Shyam Kumar, C.N.; Janaki Ram, G.D.
2015-01-01
Metal matrix composites (MMCs) exhibit improved strength but suffer from low ductility. Metal particles reinforcement can be an alternative to retain the ductility in MMCs (Bauri and Yadav, 2010; Thakur and Gupta, 2007) [1,2]. However, processing such composites by conventional routes is difficult. The data presented here relates to friction stir processing (FSP) that was used to process metal particles reinforced aluminum matrix composites. The data is the processing parameters, rotation and traverse speeds, which were optimized to incorporate Ni particles. A wide range of parameters covering tool rotation speeds from 1000 rpm to 1800 rpm and a range of traverse speeds from 6 mm/min to 24 mm/min were explored in order to get a defect free stir zone and uniform distribution of particles. The right combination of rotation and traverse speed was found from these experiments. Both as-received coarse particles (70 µm) and ball-milled finer particles (10 µm) were incorporated in the Al matrix using the optimized parameters. PMID:26566541
Zhao, Yaqin; Yue, Qinyan; Li, Renbo; Yue, Min; Han, Shuxin; Gao, Baoyu; Li, Qian; Yu, Hui
2009-11-01
Sludge-fly ash ceramic particles (SFCP) and clay ceramic particles (CCP) were employed in two lab-scale up-flow biological aerated filters (BAF) for wastewater treatment to investigate the availability of SFCP used as biofilm support compared with CCP. For synthetic wastewater, under the selected hydraulic retention times (HRT) of 1.5, 0.75 and 0.37 h, respectively, the removal efficiencies of chemical oxygen demand (COD(Cr)) and ammonium nitrogen (NH(4)(+)-N) in SFCP reactor were all higher than those of CCP reactor all through the media height. Moreover, better capabilities responding to loading shock and faster recovery after short intermittence were observed in the SFCP reactor compared with the CCP reactor. For municipal wastewater treatment, which was carried out under HRT of 0.75 h, air-liquid ratio of 7.5 and backwashing period of 48 h, the SFCP reactor also performed better than the CCP reactor, especially for the removal of NH(4)(+)-N. PMID:19540753
Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.
2013-01-01
This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
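The classical particle filter loop that the self-localization method builds on can be sketched as follows. This is a minimal 1-D illustration with an assumed landmark-distance measurement model, not the paper's implementation; all function and parameter names are ours:

```python
import math
import random

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.05, meas_noise=0.2):
    """One predict-update-resample cycle of a 1-D particle filter.

    particles:   list of floats (hypothesized robot positions)
    weights:     list of floats summing to 1
    control:     commanded displacement since the last step
    measurement: observed distance to a known landmark at x = 0
    """
    # Predict: propagate each particle through the noisy motion model.
    particles = [p + control + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [w * math.exp(-((abs(p) - measurement) ** 2)
                            / (2 * meas_noise ** 2))
               for p, w in zip(particles, weights)]
    total = sum(weights)
    if total <= 0.0:   # all likelihoods underflowed: reset to uniform
        weights = [1.0 / len(particles)] * len(particles)
    else:
        weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

The paper's contribution sits in the resampling stage: a cheaper particle selection strategy that reduces per-iteration CPU time on constrained platforms such as the Nao.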
A K edge filter technique for optimization of the coherent-to-Compton scatter ratio method.
Harding, G; Armstrong, R; McDaid, S; Cooper, M J
1995-12-01
The ratio method involves forming the ratio of the elastic to inelastic x-ray scatter signals from a localized region of a scattering medium to determine its mean atomic number. An analysis is presented of two major error sources influencing the ratio method: firstly statistical (photon) noise and secondly multiple scattering and self-attenuation of the primary and scatter radiations in the medium. It is shown that a forward scattering geometry minimizes errors of both types for substances composed of elements with low and medium atomic number. However, owing to the small energy separation (approximately 100 eV) of coherent and Compton scatter for this geometry, they cannot be distinguished directly with semiconductor (e.g., Ge) detectors. A novel K edge filter technique is described which permits separation of the elastic and Compton signals in the forward-scatter geometry. The feasibility of this method is demonstrated by experimental results obtained with Ta fluorescence radiation provided by a fluorescent x-ray source filtered with an Er foil. The extension of this technique to the "in vivo" measurement of low momentum transfer inelastic scattering from biological tissues, possibly providing useful diagnostic information, is briefly discussed. PMID:8746705
EOP prediction using least square fitting and autoregressive filter over optimized data intervals
NASA Astrophysics Data System (ADS)
Xu, XueQing; Zhou, YongHong
2015-11-01
This study first computes 1-90 day predictions of the Earth orientation parameters (EOP: UT1-UTC and polar motion) using base sequences of different lengths with the combined least-squares and autoregressive (LS+AR) method, and identifies the base-sequence length that gives the best result for each prediction span, an approach we call "prediction over optimized data intervals". Compared with EOP predictions using fixed base data intervals, prediction over optimized data intervals performs better for UT1-UTC and shows a significant improvement for polar motion, notably strengthening our competitive standing in the international Earth Orientation Parameters Combination of Prediction Pilot Project.
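The combined LS+AR idea can be illustrated with a toy one-dimensional version: fit a deterministic trend by least squares, fit an autoregressive model to the residuals, and extrapolate both. The sketch below uses a straight-line trend and an AR(1) residual model for brevity; the actual EOP predictions use a harmonic LS model and a higher-order AR fit, and all names here are illustrative:

```python
def ls_ar_forecast(series, horizon):
    """Forecast `series` by a least-squares trend plus an AR(1) residual
    model -- a toy stand-in for the LS+AR scheme."""
    n = len(series)
    t = list(range(n))
    # Least-squares straight-line fit via the normal equations.
    tm = sum(t) / n
    ym = sum(series) / n
    b = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, series))
         / sum((ti - tm) ** 2 for ti in t))
    a = ym - b * tm
    resid = [yi - (a + b * ti) for ti, yi in zip(t, series)]
    # AR(1) coefficient from the lag-1 autocovariance of the residuals.
    num = sum(resid[i] * resid[i - 1] for i in range(1, n))
    den = sum(r * r for r in resid) or 1.0
    phi = num / den
    # Extrapolate: trend plus a geometrically decaying AR(1) residual.
    r = resid[-1]
    out = []
    for h in range(1, horizon + 1):
        r *= phi
        out.append(a + b * (n - 1 + h) + r)
    return out
```

Choosing the best length of `series` (the base data interval) for each horizon is precisely what the study optimizes.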
Kim, Seongho; Li, Lang
2013-01-01
The statistical identifiability of nonlinear pharmacokinetic (PK) models with the Michaelis-Menten (MM) kinetic equation is considered using a global optimization approach, namely particle swarm optimization (PSO). If a model is statistically non-identifiable, the conventional derivative-based estimation approach is often terminated early without converging, owing to singularity. To circumvent this difficulty, we develop a derivative-free global optimization algorithm by combining PSO with a derivative-free local optimization algorithm to improve the rate of convergence of PSO. We further propose an efficient approach that not only checks the convergence of estimation but also detects the identifiability of nonlinear PK models. PK simulation studies demonstrate that the convergence and identifiability of the PK model can be detected efficiently through the proposed approach. The proposed approach is then applied to clinical PK data along with a two-compartment model. PMID:24216078
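A minimal global-best PSO of the kind used as the outer global search can be sketched as follows. This is a 1-D toy with assumed default coefficients; the paper's algorithm additionally couples PSO with a derivative-free local optimizer, which is omitted here:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO for a 1-D objective (illustrative only).

    bounds: (lo, hi) search interval; w, c1, c2: inertia, cognitive and
    social coefficients.
    """
    random.seed(42)
    lo, hi = bounds
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                        # each particle's best position
    pval = [f(x) for x in xs]
    gval = min(pval)
    g = pbest[pval.index(gval)]          # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            # Velocity update: inertia + cognitive + social terms.
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (g - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            fx = f(xs[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, xs[i]
                if fx < gval:
                    gval, g = fx, xs[i]
    return g
```

Because no derivatives of `f` are required, the search does not break down at the singular points that terminate derivative-based estimators for non-identifiable models.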
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool-positioning techniques, sensors embedded in the motion stages provide accurate tool-position information. In this paper, a machine-vision system and image-processing technique for measuring lathe-tool motion from two-dimensional sequential images, captured using a charge-coupled device camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the distance values computed from these images. Errors due to the machine-vision system, calibration, environmental factors, etc. in lathe-tool movement were minimized using two soft-computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show better capability of AIS over PSO.
Multi-Particle Quantum Szilard Engine with Optimal Cycles Assisted by a Maxwell's Demon
C. Y. Cai; H. Dong; C. P. Sun
2011-12-03
We present a complete-quantum description of multi-particle Szilard engine which consists of a working substance and a Maxwell's demon. The demon is modeled as a multi-level quantum system with specific quantum control and the working substance consists of identical particles obeying Bose-Einstein or Fermi-Dirac statistics. In this description, a reversible scheme to erase the demon's memory by a lower temperature heat bath is used. We demonstrate that (1) the quantum control of the demon can be optimized for single-particle Szilard engine so that the efficiency of the demon-assisted thermodynamic cycle could reach the Carnot cycle's efficiency; (2) the low-temperature behavior of the working substance is very sensitive to the quantum statistics of the particles and the insertion position of the partition.
NASA Astrophysics Data System (ADS)
Ramimoghadam, Donya; Bagheri, Samira; Yousefi, Amin Termeh; Abd Hamid, Sharifah Bee
2015-11-01
In this study, nanomagnetite particles were successfully prepared via the coprecipitation method. The effect of the key explanatory variables on the saturation magnetization of the synthesized nanomagnetite particles was investigated using response surface methodology (RSM). The interaction of the parameters involved in the growth process was examined with the central composite design method, which designates a set of experiments that determine the interactions of the variables. A vibrating sample magnetometer (VSM) was used to confirm the statistical analysis. Furthermore, the regression analysis ranks the variables' influence on the saturation magnetization of the nanomagnetite particles through a statistical model of the saturation magnetization. According to the fitted model, the strongest interaction involves pH and temperature, with optimized conditions of 9-11 and 75-85 °C, respectively. The response obtained by VSM suggests that the saturation magnetization of nanomagnetite particles can be controlled by constraining the effective parameters.
A Lyapunov-Based Extension to Particle Swarm Dynamics for Continuous Function Optimization
Bhattacharya, Sayantani; Konar, Amit; Das, Swagatam; Han, Sang Yong
2009-01-01
The paper proposes three alternative extensions to the classical global-best particle swarm optimization dynamics, and compares their relative performance with the standard particle swarm algorithm. The first extension, which readily follows from the well-known Lyapunov's stability theorem, provides a mathematical basis of the particle dynamics with a guaranteed convergence at an optimum. The inclusion of local and global attractors to this dynamics leads to faster convergence speed and better accuracy than the classical one. The second extension augments the velocity adaptation equation by a negative randomly weighted positional term of individual particle, while the third extension considers the negative positional term in place of the inertial term. Computer simulations further reveal that the last two extensions outperform both the classical and the first extension in terms of convergence speed and accuracy. PMID:22303158
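The three velocity-update variants can be contrasted in a few lines. The coefficient name `c3` for the random weight on the negative positional term is our notation, not the paper's, and the sketch is illustrative rather than a reimplementation:

```python
import random

def velocity_update(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, c3=0.5,
                    variant="classical"):
    """Compare the classical gbest PSO velocity update with the paper's
    second and third extensions (all coefficient names are ours).

    All three random weights are drawn up front so the variants are
    directly comparable under the same random seed.
    """
    r1, r2, r3 = (random.random() for _ in range(3))
    # Classical: inertia + cognitive + social terms.
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if variant == "extension2":
        # Second extension: add a negative randomly weighted positional term.
        v_new -= c3 * r3 * x
    elif variant == "extension3":
        # Third extension: the negative positional term replaces inertia.
        v_new = -c3 * r3 * x + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return v_new
```

For a particle at positive `x`, the added term pulls the velocity back toward the origin-centered region, which is the mechanism the simulations credit for the faster convergence of the last two extensions.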
NASA Astrophysics Data System (ADS)
Palma, Giuseppe; Bia, Pietro; Mescia, Luciano; Yano, Tetsuji; Nazabal, Virginie; Taguchi, Jun; Moréac, Alain; Prudenzano, Francesco
2014-07-01
A mid-IR amplifier consisting of a tapered chalcogenide fiber coupled to an Er-doped chalcogenide microsphere has been optimized via a particle swarm optimization (PSO) approach. More precisely, a dedicated three-dimensional numerical model, based on the coupled mode theory and solving the rate equations, has been integrated with the PSO procedure. The rate equations have included the main transitions among the erbium energy levels, the amplified spontaneous emission, and the most important secondary transitions pertaining to the ion-ion interactions. The PSO has allowed the optimal choice of the microsphere and fiber radius, taper angle, and fiber-microsphere gap in order to maximize the amplifier gain. The taper angle and the fiber-microsphere gap have been optimized to efficiently inject into the microsphere both the pump and the signal beams and to improve their spatial overlapping with the rare-earth-doped region. The employment of the PSO approach shows different attractive features, especially when many parameters have to be optimized. The numerical results demonstrate the effectiveness of the proposed approach for the design of amplifying systems. The PSO-based optimization approach has allowed the design of a microsphere-based amplifying system more efficient than a similar device designed by using a deterministic optimization method. In fact, the amplifier designed via the PSO exhibits a simulated gain G=33.7 dB, which is higher than the gain G=6.9 dB of the amplifier designed via the deterministic method.
Optimal spatial filtering and transfer function for SAR ocean wave spectra
NASA Technical Reports Server (NTRS)
Beal, R. C.; Tilley, D. G.
1981-01-01
The impulse response of the SAR system is not a delta function and the spectra represent the product of the underlying image spectrum with the transform of the impulse response which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter and the trend was sampled in a direction normal to the predominant wave direction. This yielded a transform of a noise-like process. The smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction of the effects of speckle is effected by the subtraction of a bias from the spectrum.
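The 5 x 5 convolution smoothing step can be sketched as a simple box filter. This is an illustrative stand-in: the paper does not specify the kernel weights, and the edge handling here (shrinking the window at the borders) is our assumption:

```python
def box_smooth_5x5(img):
    """5x5 moving-average (box) smoothing of a 2-D list-of-lists image,
    shrinking the window at the edges -- the kind of convolution filter
    used to smooth a digitally computed spectrum before trend fitting."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for di in range(-2, 3):
                for dj in range(-2, 3):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += img[ii][jj]
                        cnt += 1
            out[i][j] = acc / cnt
    return out
```

In the paper's pipeline, a trend sampled from the smoothed spectrum (normal to the wave direction) is then fit with a second- or fourth-order polynomial and divided out to remove the SAR impulse response.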
A robust SVD-based image watermarking using a multi-objective particle swarm optimization
NASA Astrophysics Data System (ADS)
Loukhaoukha, K.; Nabti, M.; Zebbiche, K.
2014-03-01
The major objective in developing a robust digital watermarking algorithm is to obtain the highest possible robustness without losing visual imperceptibility. To achieve this objective, we propose in this paper an optimal image watermarking scheme using multi-objective particle swarm optimization (MOPSO) and singular value decomposition (SVD) in the wavelet domain. Having decomposed the original image into ten sub-bands, singular value decomposition is applied to a chosen detail sub-band. Then, the singular values of the chosen sub-band are modified by multiple scaling factors (MSF) to embed the singular values of the watermark image. Various combinations of multiple scaling factors are possible, and it is difficult to obtain optimal solutions. Thus, in order to achieve the highest possible robustness and imperceptibility, multi-objective optimization of the multiple scaling factors is necessary. This work employs particle swarm optimization to obtain optimum multiple scaling factors. Experimental results of the proposed approach show significant improvement in both imperceptibility and robustness under various attacks.
Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong
2015-10-01
The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly with range, so maintaining high retrieval accuracy at the far end of the measurement range is difficult. To address this problem, many de-noising algorithms have been developed; in particular, an effective algorithm has been proposed that simultaneously retrieves lidar data and obtains a de-noised signal by combining the ensemble Kalman filter (EnKF) with the Fernald method. This algorithm enhances the retrieval accuracy and effective measurement range of a lidar relative to the Fernald method alone, but sometimes produces a shift (bias) in the near range as a result of over-smoothing by the EnKF. This study proposes a new scheme that avoids this behavior by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods: the root mean square errors of the PF are 52.55% and 38.14% of those of the Fernald and EnKF methods, and the PF increases the SNR by 44.36% and 11.57% relative to the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF in the near range is 5.72%, which the PF reduces to 2.15%. Furthermore, the PF also significantly suppresses random noise in the far range. Extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164
Robustness issues in Kalman filtering
Ruckdeschel, Peter
Robustness issues in Kalman filtering revisited. Peter Ruckdeschel, Fraunhofer ITWM. The filtering problem is to minimize E||x_t - f_t(y_{1:t})||^2 over f_t, with y_{1:t} = (y_1, ..., y_t); the classical Kalman (Kalman-Bucy) filter is the optimal solution among linear filters.
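For reference, the linear-Gaussian solution of this filtering problem is the Kalman recursion, sketched below for the scalar case. This is a textbook sketch with illustrative parameter names, not code from the talk:

```python
def kalman_1d(ys, a=1.0, c=1.0, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r).  Returns the filtered estimates,
    i.e. the linear-least-squares solution to min E(x_t - f_t(y_{1:t}))^2.
    """
    x, p = x0, p0
    est = []
    for y in ys:
        # Predict: propagate the state mean and error variance.
        x = a * x
        p = a * p * a + q
        # Update: blend in the new observation via the Kalman gain.
        k = p * c / (c * p * c + r)
        x = x + k * (y - c * x)
        p = (1.0 - k * c) * p
        est.append(x)
    return est
```

The robustness question the talk raises is what happens when the Gaussian noise assumptions behind this recursion fail, e.g. under outliers in `ys`.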
Shuttle filter study. Volume 1: Characterization and optimization of filtration devices
NASA Technical Reports Server (NTRS)
1974-01-01
A program to develop a new technology base for filtration equipment and comprehensive fluid particulate contamination management techniques was conducted. The study has application to the systems used in the space shuttle and space station projects. The scope of the program is as follows: (1) characterization and optimization of filtration devices, (2) characterization of contaminant generation and contaminant sensitivity at the component level, and (3) development of a comprehensive particulate contamination management plane for space shuttle fluid systems.
Particle Swarm Optimization (PSO) en modelos semi-analíticos de formación de galaxias
NASA Astrophysics Data System (ADS)
Ruiz, A. N.; Domínguez, M. J.; Padilla, N. D.; Cora, S. A.; García Lambas, D.; Tecce, T. E.; Gargiulo, I. D.; Muñoz Arancibia, A. M.
We present preliminary results of calibrations of a semi-analytic galaxy formation model performed using the Particle Swarm Optimization (PSO) technique. This method explores the parameter space with random walks of a set of "particles" that share information among them. Thus, by comparing the model results against a set of observables (e.g. luminosity functions, the relation between black hole mass and bulge mass, morphological fractions), the PSO method yields a set of best-fitting values for the free parameters of the model. FULL TEXT IN SPANISH
NASA Astrophysics Data System (ADS)
Abedi, Kambiz; Mirjalili, Seyed Mohammad
2015-03-01
Recently, the majority of research on designing Photonic Crystal Waveguides (PCW) has focused on extracting the relations between the output slow-light properties of a PCW and its structural parameters through a huge number of tedious, non-systematic simulations in order to arrive at better designs. This paper proposes a novel systematic approach that alleviates the difficulty and human involvement in designing PCWs. In the proposed method, the problem of PCW design is first formulated as an optimization problem. An optimizer is then employed to automatically find the optimum design for the formulated PCW, while different constraints are considered during optimization in order to apply physical limitations to the final optimum structure. As a case study, the structure of a Bragg-like Corrugation Slotted PCW (BCSPCW) is optimized using the proposed method. One of the most computationally powerful techniques in Computational Intelligence (CI), Particle Swarm Optimization (PSO), is employed as the optimizer to automatically find the optimum BCSPCW structure. The optimization is performed subject to five constraints that guarantee the feasibility of the final structures and avoid band mixing. Numerical results demonstrate that the proposed method finds an optimum BCSPCW structure with substantial improvements of 172% in bandwidth and 100% in Normalized Delay-Bandwidth Product (NDBP) compared with the best structure in the literature. Moreover, a time-domain analysis at the end of the paper verifies the performance of the optimized structure and shows that it exhibits low distortion and attenuation simultaneously.
Filter and method of fabricating
Janney, Mark A.
2006-02-14
A method of making a filter includes the steps of: providing a substrate having a porous surface; applying to the porous surface a coating of dry powder comprising particles to form a filter preform; and heating the filter preform to bind the substrate and the particles together to form a filter.
Lee, Eon S; Zhu, Yifang
2014-02-18
Modern passenger vehicles are commonly equipped with cabin air filters, but their filtration efficiency for ultrafine particles (UFP) is rather low. Although setting the vehicle ventilation system to recirculation (RC) mode can reduce in-cabin UFPs by approximately 90%, passenger-exhaled carbon dioxide (CO2) can quickly accumulate inside the cabin. Using outdoor air (OA) mode instead can provide sufficient air exchange to prevent CO2 buildup, but in-cabin UFP concentrations would increase. To overcome this dilemma, we developed a simultaneous mitigation method for UFP and CO2 using high-efficiency cabin air (HECA) filtration in OA mode. Concentrations of UFP and other air pollutants were simultaneously monitored in and out of 12 different vehicles under 3 driving conditions: stationary, on local roadways, and on freeways. Under each experimental condition, data were collected with no filter, the in-use original equipment manufacturer (OEM) filter, and two types of HECA filters. The HECA filters offered an average in-cabin UFP reduction of 93%, much higher than the OEM filters (approximately 50% on average). Throughout the measurements, the in-cabin CO2 concentration remained in the range of 620-930 ppm, significantly lower than the typical level of 2500-4000 ppm observed in the RC mode. PMID:24471775