
1

Differential evolution particle swarm optimization for digital filter design

In this paper, swarm and evolutionary algorithms have been applied to the design of digital filters. Particle swarm optimization (PSO) and differential evolution particle swarm optimization (DEPSO) have been used here for the design of linear phase finite impulse response (FIR) filters. Two different fitness functions have been studied and experimented with, each having its own significance. The first study considers

Bipul Luitel; Ganesh K. Venayagamoorthy

2008-01-01

2

Clever particle filters, sequential importance sampling and the optimal proposal

NASA Astrophysics Data System (ADS)

Particle filters rely on sequential importance sampling, and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time t_{k+1} that depend on the state at t_k and the observations at time t_{k+1}. I show that, beginning with particles drawn randomly from the conditional distribution of the state at t_k given observations through t_k, the optimal proposal (the distribution of the state at t_{k+1} given the state at t_k and the observations at t_{k+1}) minimizes the variance of the importance weights for particles at t_k over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, although they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.

Snyder, Chris

2014-05-01
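The weight degeneracy that Snyder's bounds describe is easy to reproduce numerically. The sketch below (all parameters are illustrative assumptions, not taken from the paper) draws particles from an i.i.d. Gaussian prior, weights them against a Gaussian likelihood in each dimension, and watches the largest normalized importance weight approach unity as the dimension grows:

```python
import math
import random

def max_normalized_weight(n_particles, dim, rng):
    """Draw particles from a standard Gaussian prior with `dim` i.i.d.
    components, weight them against a unit-variance Gaussian likelihood
    centred at 1.0 in every component, and return the largest
    normalized importance weight."""
    log_w = []
    for _ in range(n_particles):
        lw = 0.0
        for _ in range(dim):
            x = rng.gauss(0.0, 1.0)          # particle component from the prior
            lw += -0.5 * (x - 1.0) ** 2      # log-likelihood of observing 1.0
        log_w.append(lw)
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]   # subtract max for stability
    s = sum(w)
    return max(wi / s for wi in w)

rng = random.Random(0)
for dim in (1, 10, 100):
    print(dim, round(max_normalized_weight(500, dim, rng), 3))
```

With a few hundred particles the largest weight stays small in one dimension but is close to 1 by dimension 100, which is exactly the collapse the abstract refers to.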

3

Linear phase low pass FIR filter design using Improved Particle Swarm Optimization

In this paper, an optimal design of linear phase digital low pass finite impulse response (FIR) filter using Improved Particle Swarm Optimization (IPSO) has been presented. In the design process, the filter length, pass band and stop band frequencies, feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem. The conventional gradient

Saptarshi Mukherjee; Rajib Kar; Durbadal Mandal; Sangeeta Mondal; S. P. Ghoshal

2011-01-01
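A minimal version of PSO-based linear phase FIR design can be sketched as follows. The filter length, band edges, grid density, and all swarm parameters here are illustrative assumptions; the papers above use more elaborate fitness functions and, in IPSO, modified update rules:

```python
import math
import random

def magnitude(h, w):
    """Magnitude response of an FIR filter with taps h at frequency w."""
    re = sum(h[k] * math.cos(w * k) for k in range(len(h)))
    im = sum(h[k] * math.sin(w * k) for k in range(len(h)))
    return math.hypot(re, im)

def fitness(half):
    """Peak error of a 21-tap symmetric (linear phase) low-pass filter,
    with assumed pass band edge 0.3*pi and stop band edge 0.45*pi."""
    h = half + half[-2::-1]                  # mirror taps: linear phase
    err = 0.0
    for i in range(40):
        w = math.pi * i / 39
        m = magnitude(h, w)
        if w <= 0.3 * math.pi:
            err = max(err, abs(m - 1.0))     # pass band ripple
        elif w >= 0.45 * math.pi:
            err = max(err, m)                # stop band leakage
    return err

def pso_design(dim=11, swarm=20, iters=120, seed=1):
    """Plain gbest PSO over the first 11 taps of the symmetric filter."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    gbest_f = min(pbest_f)
    gbest = pbest[pbest_f.index(gbest_f)][:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Because the peak (minimax) error is non-differentiable, a population method such as PSO is a natural fit here, whereas the conventional gradient methods the abstract alludes to can stall.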

4

A discrete particle swarm optimization technique (DPSO) for power filter design

In this paper, a novel optimization approach is developed to optimally solve the problem of power system shunt filter design based on discrete particle swarm optimization (DPSO) technique to ensure harmonic reduction and noise mitigation on the electrical utility grid. The proposed power filter design is based on the minimization of a multi objective function. The main power filter objective

Adel M. Sharaf; Adel A. A. El-Gammal

2009-01-01

5

Neuromuscular fiber segmentation through particle filtering and discrete optimization

NASA Astrophysics Data System (ADS)

We present an algorithm to segment a set of parallel, intertwined and bifurcating fibers from 3D images, targeted for the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probabilities computation is performed by means of SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic, but efficient manual interactions are possible if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the Diadem Challenge. On the first 15 blocks present, our method has a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is higher than or equivalent to that of the state-of-the-art method, but fewer user interactions are needed in our approach.

Dietenbeck, Thomas; Varray, François; Kybic, Jan; Basset, Olivier; Cachard, Christian

2014-03-01

6

A novel recursive scheme to compute the globally and robustly optimal variable fractional delay (VFD) filters based on Particle Swarm Optimization (PSO) is developed in this paper. If PSO is used directly to compute an optimal VFD filter, particles of high dimension may result, which can require a long convergence time. Our recursive scheme invokes only

Dongyan Sun; Jiaxiang Zhao; Xiaoming Zhao

2009-01-01

7

Particle Swarm Optimization with Quantum Infusion for the design of digital filters

In this paper, particle swarm optimization with quantum infusion (PSO-QI) has been applied for the design of digital filters. In PSO-QI, Global best (gbest) particle (in PSO star topology) obtained from particle swarm optimization is enhanced by doing a tournament with an offspring produced by quantum behaved PSO, and selecting the winner as the new gbest. Filters are designed based

Bipul Luitel; Ganesh Kumar Venayagamoorthy

2008-01-01

8

NASA Astrophysics Data System (ADS)

Image exploitation algorithms for Intelligence, Surveillance and Reconnaissance (ISR) and weapon systems are extremely sensitive to differences between the operating conditions (OCs) under which they are trained and the extended operating conditions (EOCs) in which the fielded algorithms are tested. As an example, terrain type is an important OC for the problem of tracking hostile vehicles from an airborne camera. A system designed to track cars driving on highways and on major city streets would probably not do well in the EOC of parking lots because of the very different dynamics. In this paper, we present a system we call ALPS for Adaptive Learning in Particle Systems. ALPS takes as input a sequence of video images and produces labeled tracks. The system detects moving targets and tracks those targets across multiple frames using a multiple hypothesis tracker (MHT) tightly coupled with a particle filter. This tracker exploits the strengths of traditional MHT based tracking algorithms by directly incorporating tree-based hypothesis considerations into the particle filter update and resampling steps. We demonstrate results in a parking lot domain tracking objects through occlusions and object interactions.

Stevens, Mark R.; Gutchess, Dan; Checka, Neal; Snorrason, Magnús

2006-05-01

9

Particle Reentrainment from Fibrous Filters

When a respirator wearer breathes normally, airborne bacteria and particles may be collected by the filter medium of the respirator. If these particles are reentrained by sneezing or by coughing during the exhalation cycle, they may reach other targets. To study this hypothesis, particle reentrainment from polymer and glass fiber filters was investigated by measuring the number of reentrained

Yinge Qian; Klaus Willeke; Vidmantas Ulevicius; Sergey A. Grinshpun

1997-01-01

10

Optimal stochastic fault detection filter

A fault detection and identification algorithm, called optimal stochastic fault detection filter, is determined. The objective of the filter is to detect a single fault, called the target fault, and block other faults, called the nuisance faults, in the presence of the process and sensor noises. The filter is derived by maximizing the transmission from the target fault to the

Robert H. Chen; D. Lewis Mingori; Jason L. Speyer

2003-01-01

11

Optimal stochastic fault detection filter

Properties of the optimal stochastic fault detection filter for fault detection and identification are determined. The objective of the filter is to monitor certain faults called target faults and block other faults which are called nuisance faults. This filter is derived by keeping the ratio of the transmission from nuisance fault to the transmission from target fault small. It is

Robert H. Chen; Jason L. Speyer

1999-01-01

12

Particle Filters for State Estimation of Jump Markov Linear Systems

Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem.

Arnaud Doucet; Neil J. Gordon; Vikram Krishnamurthy

1999-01-01

13

Particle filters for state estimation of jump Markov linear systems

Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our

Arnaud Doucet; Neil J. Gordon; Vikram Krishnamurthy

2001-01-01
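The algorithms in the two papers above are more refined (they exploit the linear substructure), but the basic idea of particle filtering for a jump Markov linear system can be sketched with a plain bootstrap filter. The two-mode system below and all of its numerical values are invented for illustration:

```python
import math
import random

# Toy two-mode jump Markov linear system (all values are assumptions):
A = [0.9, 0.5]                           # state coefficient in each mode
P_TRANS = [[0.95, 0.05], [0.10, 0.90]]   # Markov chain transition matrix
Q, R = 0.1, 0.1                          # process / measurement noise variances

def step_mode(mode, rng):
    """Sample the next mode of the finite-state Markov chain."""
    return 0 if rng.random() < P_TRANS[mode][0] else 1

def simulate(steps, seed=1):
    """Generate a state trajectory and noisy measurements y_t = x_t + v_t."""
    rng = random.Random(seed)
    mode, x, xs, ys = 0, 0.0, [], []
    for _ in range(steps):
        mode = step_mode(mode, rng)
        x = A[mode] * x + rng.gauss(0.0, math.sqrt(Q))
        xs.append(x)
        ys.append(x + rng.gauss(0.0, math.sqrt(R)))
    return xs, ys

def particle_filter(ys, n=500, seed=0):
    """Bootstrap filter: each particle carries (mode, state);
    propagate, weight by the measurement likelihood, resample."""
    rng = random.Random(seed)
    parts = [(0, rng.gauss(0.0, 1.0)) for _ in range(n)]
    estimates = []
    for y in ys:
        moved = []
        for mode, x in parts:
            m2 = step_mode(mode, rng)        # sample the mode transition
            moved.append((m2, A[m2] * x + rng.gauss(0.0, math.sqrt(Q))))
        w = [math.exp(-0.5 * (y - x) ** 2 / R) for _, x in moved]
        s = sum(w)
        estimates.append(sum(wi * x for wi, (_, x) in zip(w, moved)) / s)
        parts = rng.choices(moved, weights=w, k=n)   # multinomial resampling
    return estimates
```

For this toy system the filtered estimate tracks the state more closely than the raw measurements, since the filter exploits the mode-dependent dynamics.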

14

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The ''inverted candle filter system'' uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The ''sheet filter system'' uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies.
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.

R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

1998-04-30

15

Design of Optimal Digital Filters

NASA Astrophysics Data System (ADS)

Four methods for designing digital filters optimal in the Chebyshev sense are developed. The properties of these filters are investigated and compared. An analytic method for designing narrow-band FIR filters using Zolotarev polynomials, which are extensions of Chebyshev polynomials, is proposed. Bandpass and bandstop narrow-band filters as well as lowpass and highpass filters can be designed by this method. The design procedure, related formulae and examples are presented. An improved method of designing optimal minimum phase FIR filters by directly finding zeros is proposed. The zeros off the unit circle are found by an efficient special purpose root-finding algorithm without deflation. The proposed algorithm utilizes the passband minimum ripple frequencies to establish the initial points, and employs a modified Newton's iteration to find the accurate initial points for a standard Newton's iteration. The proposed algorithm can be used to design very long filters (L = 325) with very high stopband attenuations. The design of FIR digital filters in the complex domain is investigated. The complex approximation problem is converted into a near equivalent real approximation problem. A standard linear programming algorithm is used to solve the real approximation problem. Additional constraints are introduced which allow weighting of the phase and/or group delay of the approximation. Digital filters are designed which have nearly constant group delay in the passbands. The desired constant group delay which gives the minimum Chebyshev error is found to be smaller than that of a linear phase filter of the same length. These filters, in addition to having a smaller, approximately constant group delay, have better magnitude characteristics than exactly linear phase filters with the same length. The filters have nearly equiripple magnitude and group delay. 
The problem of IIR digital filter design in the complex domain is formulated such that the existence of best approximation is guaranteed. An efficient and numerically stable algorithm for the design is proposed. The methods to establish a good initial point are investigated. Digital filters are designed which have nearly constant group delay in the passbands. The magnitudes of the filter poles near the passband edge are larger than of those far from the passband edge. A delay overshooting may occur in the transition band (don't care region), and it can be reduced by decreasing the maximum allowed pole magnitude of the design problem at the expense of increasing the approximation error.

Chen, Xiangkun

16

Particle flow for nonlinear filters with log-homotopy

NASA Astrophysics Data System (ADS)

We describe a new nonlinear filter that is vastly superior to the classic particle filter. In particular, the computational complexity of the new filter is many orders of magnitude less than the classic particle filter with optimal estimation accuracy for problems with dimension greater than 2 or 3. We consider nonlinear estimation problems with dimensions varying from 1 to 20 that are smooth and fully coupled (i.e. dense not sparse). The new filter implements Bayes' rule using particle flow rather than with a pointwise multiplication of two functions; this avoids one of the fundamental and well known problems in particle filters, namely "particle collapse" as a result of Bayes' rule. We use a log-homotopy to derive the ODE that describes particle flow. This paper was written for normal engineers, who do not have homotopy for breakfast.

Daum, Fred; Huang, Jim

2008-04-01

17

Distributed SLAM using improved particle filter for mobile robot localization.

The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with the centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two respects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed by mixing the innovation method and the effective-particle-number method, combining the advantages of the two. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and gives the system better tolerance and robustness. PMID:24883362

Pei, Fujun; Wu, Mei; Zhang, Simin

2014-01-01
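The effective-number-of-particles criterion mentioned above is a standard degeneracy diagnostic. A common sketch (details assumed, not taken from the paper) computes N_eff from the normalized weights and resamples, for instance systematically, only when N_eff falls below a threshold such as N/2:

```python
import random

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) over normalized weights: equals N for
    uniform weights and approaches 1 when one particle carries all
    of the probability mass."""
    s = sum(weights)
    norm = [w / s for w in weights]
    return 1.0 / sum(w * w for w in norm)

def systematic_resample(particles, weights, rng):
    """Systematic resampling: one uniform offset plus a fixed 1/N
    stride, which has lower variance than multinomial resampling."""
    n = len(particles)
    s = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / s
        cum.append(acc)
    u0 = rng.random() / n
    out, j = [], 0
    for i in range(n):
        while j < n - 1 and cum[j] < u0 + i / n:
            j += 1
        out.append(particles[j])
    return out

def maybe_resample(particles, weights, rng, threshold=0.5):
    """Resample only when the weights have degenerated past the threshold."""
    if effective_sample_size(weights) < threshold * len(particles):
        return systematic_resample(particles, weights, rng)
    return particles
```

Resampling only on demand preserves particle diversity between observations, which is one way to mitigate the impoverishment problem the abstract describes.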


19

A survey of convergence results on particle filtering methods for practitioners

Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large

Dan Crisan; Arnaud Doucet

2002-01-01
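The convergence these results formalize can be observed empirically. The toy scalar Gaussian model below is chosen for illustration because the exact posterior mean is known (y/2), so the error of a self-normalized importance-sampling approximation can be measured as the number of particles grows:

```python
import math
import random

def posterior_mean_estimate(n, rng, y=1.0):
    """Self-normalized importance sampling for an assumed toy model:
    x ~ N(0,1) prior, y = x + N(0,1) noise, so E[x | y] = y/2 exactly.
    Particles are drawn from the prior and weighted by the likelihood."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ws = [math.exp(-0.5 * (y - x) ** 2) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def mean_abs_error(n, repeats=20, seed=0):
    """Average absolute error of the particle approximation of E[x | y=1]."""
    rng = random.Random(seed)
    return sum(abs(posterior_mean_estimate(n, rng) - 0.5)
               for _ in range(repeats)) / repeats

for n in (100, 1000, 10000):
    print(n, round(mean_abs_error(n), 4))
```

The error shrinks at roughly the Monte Carlo rate of 1/sqrt(n), which is the kind of convergence behavior the survey makes precise.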

20

NASA Technical Reports Server (NTRS)

The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.

Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

2002-01-01

21

A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life

James N. Kennedy; Russell C. Eberhart

1995-01-01

22

Westinghouse Advanced Particle Filter System

Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper reports on the development and status of testing of the Westinghouse Advanced Hot Gas Particle Filter (W-APF), including: W-APF integrated operation with the American Electric Power 70 MW PFBC clean coal facility (approximately 6000 test hours completed); approximately 2500 hours of testing at the Hans Ahlstrom 10 MW PCFB facility located in Karhula, Finland; over 700 hours of operation at the Foster Wheeler 2 MW 2nd generation PFBC facility located in Livingston, New Jersey; the status of Westinghouse HGF supply for the DOE Southern Company Services Power System Development Facility (PSDF) located in Wilsonville, Alabama; the status of Westinghouse development and testing of HGFs for biomass power generation; and the status of the design and supply of the HGF unit for the 95 MW Piñon Pine IGCC Clean Coal Demonstration.

Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.; Bachovchin, D.M. [Westinghouse Electric Corp., Pittsburgh, PA (United States). Science and Technology Center

1996-12-31

23

Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond

In this self-contained survey/review paper, we systematically investigate the roots of Bayesian filtering as well as its rich leaves in the literature. Stochastic filtering theory is briefly reviewed with emphasis on nonlinear and non-Gaussian filtering. Following the Bayesian statistics, different Bayesian filtering techniques are developed given different scenarios. Under linear quadratic Gaussian circumstance, the celebrated Kalman filter can

ZHE CHEN

24

Particle Swarm Optimization Toolbox

NASA Technical Reports Server (NTRS)

The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm relies on this combination of parental traits to produce a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer.
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.

Grant, Michael J.

2010-01-01

25

Rickard Karlsson ISIS Particle Filtering in Practice

Slide deck: Particle Filtering in Practice - Sensor Fusion, Positioning and Tracking. Rickard Karlsson, Automatic Control, Linköping University, Sweden (rickard@isy.liu.se). Presented at ISIS, 2004-11-04/05; only the slide titles, including "Particle Filtering within ISIS from my perspective", are recoverable from the extracted text.

Zhao, Yuxiao

26

Particle filters for positioning, navigation, and tracking

A framework for positioning, navigation, and tracking problems using particle filters (sequential Monte Carlo methods) is developed. It consists of a class of motion models and a general nonlinear measurement equation in position. A general algorithm is presented, which is parsimonious with the particle dimension. It is based on marginalization, enabling a Kalman filter to estimate all position derivatives, and

Fredrik Gustafsson; Fredrik Gunnarsson; Niclas Bergman; Urban Forssell; Jonas Jansson; Rickard Karlsson; Per-Johan Nordlund

2002-01-01

27

Optimal stochastic multiple-fault detection filter

A class of robust fault detection filters is generalized from detecting single fault to multiple faults. This generalization is called the optimal stochastic multiple-fault detection filter since in the formulation, the unknown fault amplitudes are modeled as white noise. The residual space of the filter is divided into several subspaces and each subspace is sensitive to only one fault (target

Robert H. Chen; Jason L. Speyer

1999-01-01

28

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.

R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

2002-06-30

29

Westinghouse advanced particle filter system

Integrated Gasification Combined Cycles (IGCC), Pressurized Fluidized Bed Combustion (PFBC) and Advanced PFBC (APFB) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC, PFBC and APFB in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of these advanced, solid fuel power generation cycles.

Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

1995-11-01

30

Westinghouse advanced particle filter system

Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper updates the assessment of the Westinghouse hot gas filter design based on ongoing testing and analysis. Results are summarized from recent computational fluid dynamics modeling of the plenum flow during back pulse, analysis of candle stressing under cleaning and process transient conditions, and testing and analysis to evaluate potential flow-induced candle vibration.

Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

1994-10-01

31

Optimal approximation algorithms for digital filter design

NASA Astrophysics Data System (ADS)

Several new algorithms are presented for the optimal approximation and design of various classes of digital filters. An iterative algorithm is developed for the efficient design of unconstrained and constrained infinite impulse response (IIR) digital filters. In both the unconstrained and constrained cases, the numerator and denominator of the filter transfer function are designed iteratively by recourse to the Remez algorithm and to appropriate design parameters and criteria at each iteration. This makes it possible for the algorithm to be implemented by means of a short main program which uses (at each iteration) the linear phase FIR filter design algorithm of McClellan et al. as a subroutine. The approach taken also permits the filter to be designed with a desired ripple ratio. Also, the algorithm determines automatically the minimum passband ripple corresponding to the prescribed orders and band edges of the filter. The filter is designed directly without guessing the passband ripple or stopband ripple.

Liang, J. K.
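The Remez-exchange procedure described in the abstract above is too involved for a short example; as a simpler, hedged illustration of the linear-phase FIR design target (a windowed-sinc design rather than the paper's optimal equiripple method; filter length and cutoff below are illustrative), a sketch in Python:

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc linear-phase lowpass FIR; cutoff is a fraction of the Nyquist rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0   # symmetric time index
    h = cutoff * np.sinc(cutoff * n)                  # ideal lowpass impulse response
    h *= np.hamming(num_taps)                         # window to control ripple
    return h / h.sum()                                # normalize to unit DC gain

h = lowpass_fir(31, 0.3)
```

Because the taps are symmetric, the phase response is exactly linear, which is the property the abstract's FIR designs preserve; an equiripple design would additionally minimize the maximum approximation error.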

32

Distance estimation using RSSI and particle filter.

This paper presents a particle filter algorithm for distance estimation using multiple antennas on the receiver's side and only one transmitter, where a received signal strength indicator (RSSI) of radio frequency was used. Two different placements of antennas were considered (parallel and circular). The physical layer of IEEE standard 802.15.4 was used for communication between transmitter and receiver. The distance was estimated as the hidden state of a stochastic system, and therefore a particle filter was implemented. The RSSI acquisitions were used for the computation of importance weights within the particle filter algorithm. The weighted particles were re-sampled in order to ensure proper distribution and density. Log-normal and ground reflection propagation models were used for the modeling of a prior distribution within a Bayesian inference. PMID:25457044

Svečko, Janja; Malajner, Marko; Gleich, Dušan

2015-03-01
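A minimal sketch of the idea in the abstract above, assuming a log-normal path-loss model and multinomial resampling (the path-loss constants, particle count, and noise levels below are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

P0, n_exp, sigma = -40.0, 2.0, 2.0   # assumed path-loss parameters: dBm at 1 m, exponent, noise std (dB)

def rssi_model(d):
    """Log-normal path-loss model: expected RSSI at distance d meters (reference d0 = 1 m)."""
    return P0 - 10.0 * n_exp * np.log10(d)

def pf_distance(rssi_obs, n_particles=2000):
    """Estimate a static distance from a sequence of noisy RSSI readings."""
    d = rng.uniform(0.5, 20.0, n_particles)          # prior particles over distance (m)
    for z in rssi_obs:
        d += rng.normal(0.0, 0.05, n_particles)      # small random-walk jitter keeps diversity
        d = np.clip(d, 0.1, None)
        w = np.exp(-0.5 * ((z - rssi_model(d)) / sigma) ** 2)   # importance weights from RSSI
        w /= w.sum()
        d = d[rng.choice(n_particles, n_particles, p=w)]         # multinomial resampling
    return d.mean()

true_d = 5.0
obs = rssi_model(true_d) + rng.normal(0, sigma, 50)
est = pf_distance(obs)
```

With 50 readings the posterior mean lands close to the true 5 m distance; fewer readings widen the posterior considerably because RSSI is a noisy function of distance.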

33

Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks Arnaud Doucet

Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks. Arnaud Doucet … the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, … junction tree algorithm, or any other finite-dimensional optimal filter. We show that Rao-Blackwellised …

Murphy, Kevin Patrick

34

Optimal multiobjective design of digital filters using spiral optimization technique.

The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108

Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

2013-01-01
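The spiral optimization metaheuristic referenced in the abstract above can be sketched in two dimensions: all points rotate about the incumbent best while contracting toward it. A toy sphere objective stands in for the paper's filter-design error metric; the rotation angle and contraction rate are illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective standing in for a filter-design error metric."""
    return np.sum(x ** 2, axis=-1)

def spiral_optimize(f, n_points=30, n_iter=200, r=0.95, theta=np.pi / 4):
    """2-D spiral optimization: rotate search points about the best point while contracting."""
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])   # contracting rotation matrix
    X = rng.uniform(-5, 5, (n_points, 2))
    best = X[np.argmin(f(X))]
    for _ in range(n_iter):
        X = best + (X - best) @ R.T                        # spiral every point toward best
        cand = X[np.argmin(f(X))]
        if f(cand) < f(best):                              # keep the incumbent if no improvement
            best = cand
    return best

best = spiral_optimize(sphere)
```

The contraction factor r < 1 drives convergence while the rotation keeps the trajectory exploring around the incumbent, which is the mechanism behind the method's resistance to local-optimum trapping claimed in the abstract.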

35

Testing particle filters on convective scale dynamics

NASA Astrophysics Data System (ADS)

Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (P. van Leeuwen, 2011) for convective scale data assimilation applications. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P. J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.

Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

2014-05-01

36

Optimal Approximation Algorithms for Digital Filter Design.

NASA Astrophysics Data System (ADS)

Several new algorithms are presented for the optimal approximation and design of various classes of digital filters. An iterative algorithm is developed for the efficient design of unconstrained and constrained infinite impulse response (IIR) digital filters. In both the unconstrained and constrained cases, the numerator and denominator of the filter transfer function are designed iteratively by recourse to the Remez algorithm and to appropriate design parameters and criteria at each iteration. This makes it possible for the algorithm to be implemented by means of a short main program which uses (at each iteration) the linear phase FIR filter design algorithm of McClellan et al. as a subroutine. The approach taken also permits the filter to be designed with a desired ripple ratio. Also, the algorithm determines automatically the minimum passband ripple corresponding to the prescribed orders and band edges of the filter. The filter is designed directly without guessing the passband ripple or stopband ripple. Another algorithm, based on similar principles, is developed for the design of a nonlinear phase finite impulse response (FIR) filter, whose transfer function optimally approximates a desired magnitude response, there being no constraints imposed on the phase response. A similar algorithm is presented for the design of two new classes of FIR digital filters, one linear phase and the other nonlinear phase. A filter of either class has a significantly reduced number of multiplications compared to the one obtained by its conventional counterpart, with respect to a given frequency response. In the case of linear phase, by introducing the new class of digital filters into the design of multistage decimators and interpolators for narrow-band filter implementation, it is found that an efficient narrow-band filter requiring a considerably lower multiplication rate than the conventional linear phase FIR design can be obtained.
The amount of data storage required by the new class of nonlinear phase FIR filters is significantly less than that of its linear phase counterpart. Finally, the design of a (finite-impulse-response) FIR digital filter with some of the coefficients constrained to zero is formulated as a linear programming (LP) problem, and the LP technique is then used to design this class of constrained FIR digital filters. … (Author's abstract exceeds stipulated maximum length. Discontinued here with permission of author.) UMI.

Liang, Junn-Kuen

37

Optimal design of active EMC filters

NASA Astrophysics Data System (ADS)

A recent trend in automotive industry is adding electrical drive systems to conventional drives. The electrification allows an expansion of energy sources and provides great opportunities for environmental friendly mobility. The electrical powertrain and its components can also cause disturbances which couple into nearby electronic control units and communication cables. Therefore the communication can be degraded or even permanently disrupted. To minimize these interferences, different approaches are possible. One possibility is to use EMC filters. However, the diversity of filters is very large and the determination of an appropriate filter for each application is time-consuming. Therefore, the filter design is determined by using a simulation tool including an effective optimization algorithm. This method leads to improvements in terms of weight, volume and cost.

Chand, B.; Kut, T.; Dickmann, S.

2013-07-01

38

MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

NASA Technical Reports Server (NTRS)

The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.

Barton, R. S.

1994-01-01

39

Fast and Accurate SLAM with Rao-Blackwellized Particle Filters

Fast and Accurate SLAM with Rao-Blackwellized Particle Filters. Giorgio Grisetti, Gian Diego … Switzerland. Abstract: Rao-Blackwellized particle filters have become a popular tool to solve the simultaneous localization and mapping problem … approach to mapping with Rao-Blackwellized particle filters. Moreover, it provides a compact map model …

Stachniss, Cyrill

40

Field of Particle Filters for Image Inpainting

We present a novel algorithm for solving the image inpainting problem based on a field of locally interacting particle filters. Image inpainting, also known as image completion, is concerned with the problem of filling image regions with new visually plausible data. In order to avoid the difficulty of solving the problem globally for the region to be inpainted, we introduce …

Anne Cuzol; Kim Steenstrup Pedersen; Mads Nielsen

2008-01-01

41

Bayesian Model Averaging Using Ensemble Particle Filtering

NASA Astrophysics Data System (ADS)

Conceptual watershed models are a valuable tool for streamflow prediction, but it is also acknowledged that no single model structure can capture all the details of a watershed. Therefore, ensembles of models are employed, and Bayesian model averaging (BMA) is increasingly being used to combine the predictions of multiple different models into a single forecast that is supposed to exhibit better predictive capability than any of the individual models. Successful implementation of BMA depends on the choice of the conditional distribution used to specify the uncertainty of each ensemble member. Most often this distribution is assumed Gaussian. Here we introduce a four-step approach that retrieves the conditional distribution for each model and time. First, we create a suite of watershed models by calibrating one conceptual model to different parts of the hydrograph. Then, a particle filter is used for each model to recursively derive the posterior probability density function of streamflow. The particle filter explicitly incorporates uncertainty in measurements and model states. Next, a cross-entropy method is employed to retrieve closed-form mathematical descriptions of these respective probability distributions. Finally, the BMA weights are estimated from these closed-form distributions using the DREAM algorithm. For the extremely diverse suite of watershed models, the RMSE for the BMA model is not necessarily better than that of the single best model. The treatment of model and measurement uncertainties in the particle filter, however, allows much better predictions than the calibrated models alone can provide.

Rings, J.; Vrugt, J. A.; Huisman, J. A.; Schoups, G.; Vereecken, H.

2010-12-01
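The BMA combination step described in the abstract above can be illustrated with a worked example: given each member's conditional (here Gaussian) predictive distribution and a set of weights, the BMA forecast is a mixture whose variance splits into a within-model term and a between-model spread term. All numbers are hypothetical:

```python
import numpy as np

# Hypothetical forecasts from three watershed models at one time step,
# each summarized as a Gaussian (mean streamflow, std) from its particle filter.
means = np.array([12.0, 15.0, 10.0])
stds  = np.array([ 2.0,  3.0,  1.5])
w     = np.array([ 0.5,  0.3,  0.2])   # BMA weights (in the paper these are fit with DREAM)

bma_mean = np.sum(w * means)
# Mixture variance = expected within-model variance + between-model spread.
bma_var = np.sum(w * stds ** 2) + np.sum(w * (means - bma_mean) ** 2)
```

Here bma_mean is 12.5 and bma_var is 8.4; the between-model term is what lets the BMA forecast spread wider than any single member when the members disagree.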

42

Point Set Registration via Particle Filtering and Stochastic Dynamics

In this paper, we propose a particle filtering approach for the problem of registering two point sets that differ by a rigid body transformation. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in pose parameters obtained by running a few iterations of a certain local optimizer. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer approaches for registration. Thus, the novelty of our method is threefold: First, we employ a particle filtering scheme to drive the point set registration process. Second, we present a local optimizer that is motivated by the correlation measure. Third, we increase the robustness of the registration performance by introducing a dynamic model of uncertainty for the transformation parameters. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity (with respect to particle size) as well as maintains the temporal coherency of the state (no loss of information). Also unlike some alternative approaches for point set registration, we make no geometric assumptions on the two data sets. Experimental results are provided that demonstrate the robustness of the algorithm to initialization, noise, missing structures, and/or differing point densities in each set, on several challenging 2D and 3D registration scenarios. PMID:20558877

Sandhu, Romeil; Dambreville, Samuel; Tannenbaum, Allen

2013-01-01

43

An improved particle filter for non-linear problems

Abstract: The Kalman filter provides an effective solution to the linear-Gaussian filtering problem. However, where there is nonlinearity, either in the model specification or the observation process, other methods are required. We consider methods known generically as particle filters, which include the condensation algorithm and the Bayesian bootstrap or sampling importance resampling (SIR) filter.

J. Carpenter; P. Clifford; P. Fearnhead

1997-01-01

44

An Improved Particle Filter for Non-linear Problems

The Kalman filter provides an effective solution to the linear-Gaussian filtering problem. However, where there is nonlinearity, either in the model specification or the observation process, other methods are required. We consider methods known generically as particle filters, which include the condensation algorithm and the Bayesian bootstrap or sampling importance resampling (SIR) filter. These filters represent the posterior …

James Carpenter; Peter Clifford; Paul Fearnhead

1999-01-01
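A minimal bootstrap (SIR) particle filter of the kind surveyed above, run on a toy linear-Gaussian model so the behavior is easy to check against intuition (the model, noise levels, and particle count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_filter(ys, n=1000, q=1.0, r=1.0):
    """Bootstrap SIR filter for x_t = 0.5*x_{t-1} + v_t, y_t = x_t + w_t (v, w Gaussian)."""
    x = rng.normal(0, 1, n)                          # initial particle cloud
    est = []
    for y in ys:
        x = 0.5 * x + rng.normal(0, np.sqrt(q), n)   # propagate through the dynamics
        w = np.exp(-0.5 * (y - x) ** 2 / r)           # likelihood weight of each particle
        w /= w.sum()
        est.append(np.sum(w * x))                     # weighted posterior mean
        x = x[rng.choice(n, n, p=w)]                  # multinomial resampling
    return np.array(est)

# Simulate a short trajectory and filter the noisy observations.
T, x_true = 40, 0.0
xs, ys = [], []
for _ in range(T):
    x_true = 0.5 * x_true + rng.normal(0, 1.0)
    xs.append(x_true)
    ys.append(x_true + rng.normal(0, 1.0))
est = sir_filter(np.array(ys))
```

On this linear-Gaussian model the Kalman filter is exact, so the particle filter's RMSE should approach the Kalman steady-state error (about 0.73 here) as the particle count grows; the value of SIR is that the same code works unchanged for nonlinear, non-Gaussian models.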

45

The Asymptotics of Optimal (Equiripple) Filters

… has been a secret for more than twenty years. This paper aims to solve this mystery … to replace Kaiser's empirical formula. Kaiser also discovered a nearly optimal family of filters … for this family. The constant in the denominator becomes slightly smaller, which increases N. This family …

Strang, Gilbert

46

Particle swarm optimization in electromagnetics

The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in

Jacob Robinson; Yahya Rahmat-Samii

2004-01-01
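The canonical global-best PSO update described in the abstract above (velocity driven by inertia plus stochastic attraction to each particle's personal best and the swarm's global best) can be sketched as follows; the inertia and acceleration coefficients are common textbook defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical global-best PSO with inertia weight w and acceleration coefficients c1, c2."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), f(x)                  # personal bests
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v                                               # position update
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g

g = pso(lambda x: np.sum(x ** 2, axis=-1))   # minimize a toy sphere objective
```

In electromagnetic applications the sphere objective would be replaced by an expensive simulation-driven cost (e.g. antenna gain error), which is why PSO's derivative-free, population-based search is attractive there.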

47

Program Computes SLM Inputs To Implement Optimal Filters

NASA Technical Reports Server (NTRS)

Minimum Euclidean Distance Optimal Filter (MEDOF) program generates filters for use in optical correlators. Analytically optimizes filters on arbitrary spatial light modulators (SLMs) of such types as coupled, binary, fully complex, and fractional-2pi-phase. Written in C language.

Barton, R. Shane; Juday, Richard D.; Alvarez, Jennifer L.

1995-01-01

48

Design of Digital Filters and Filter Banks by Optimization: A State of the Art Review

Design of Digital Filters and Filter Banks by Optimization: A State of the Art Review. W.-S. Lu … as described below. For the sake of simplicity, we consider the problem of designing a linear-phase, lowpass …

Lu, Wu-Sheng

49

Research on fuzzy robust adaptive unscented particle filtering

NASA Astrophysics Data System (ADS)

This paper presents a new fuzzy robust adaptive unscented particle filtering method based on fuzzy control theory. The method combines the advantages of fuzzy control theory, robust adaptive filtering and unscented particle filtering. The influence of gross errors in the observation vectors on the state vector parameters is used to obtain the robust adaptive unscented particle filtering model. Experimental results and comparative analysis demonstrate that the proposed methodology provides an effective solution for improving positioning accuracy in navigation systems.

Gao, Yi; Gao, Shesheng

2011-10-01

50

Rao-Blackwellized Particle Filter for Multiple Target Tracking

Rao-Blackwellized Particle Filter for Multiple Target Tracking. Simo Särkkä, Aki Vehtari, Jouko Lampinen, Helsinki University of Technology, Finland. Abstract: In this article we propose a new Rao-… particle filtering, and the efficiency of the Monte Carlo sampling is improved by using Rao-…

Särkkä, Simo

51

Human-Manipulator Interface Using Particle Filter

This paper utilizes a human-robot interface system which incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases over a short period of time when the sensors fail to detect the hand motion. Therefore, a methodology to correct the translation error is required. Moreover, because of perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430

Wang, Xueqian

2014-01-01

52

Human-manipulator interface using particle filter.

This paper utilizes a human-robot interface system which incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases over a short period of time when the sensors fail to detect the hand motion. Therefore, a methodology to correct the translation error is required. Moreover, because of perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430

Du, Guanglong; Zhang, Ping; Wang, Xueqian

2014-01-01

53

Some issues and results on the EnKF and particle filters for meteorological models

Some issues and results on the EnKF and particle filters for meteorological models. Chaos 2009. C. Baehr & O. Pannekoucke … The nonlinear filtering problem … particle filter resolution …

Baehr, Christophe

54

In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behavior of fish schools or insects under the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453

Cheng, Wen-Chang

2012-01-01

55

Hybrid weighted interacting particle filter for multitarget tracking

NASA Astrophysics Data System (ADS)

A hybrid weighted interacting particle filter, the selectively resampling particle filter (SERP), is used to detect and track multiple ships maneuvering in a region of water. The ship trajectories exhibit nonlinear dynamics and interact in a nonlinear manner such that the ships do not collide. There is no prior knowledge of the number of ships in the region. The observations model a sensor tracking the ships from above the region, as in a low-observable SAR or infrared problem. The SERP filter simulates particles to provide the approximated conditional distribution of the signal in the signal domain at a particular time, given the sequence of observations. After each observation, the hybrid filter uses selective resampling to move some particles with low weights to locations that have a higher likelihood of being correct, without resampling all particles or creating bias. Such a method is both easy to implement and highly computationally efficient. Quantitative results recording the capacity of the filter to determine the number of ships in the region and the location of each ship are presented. The hybrid filter is compared against an earlier particle filtering method.

Ballantyne, David J.; Hailes, Jarett; Kouritzin, Michael A.; Long, Hongwei; Wiersma, Jonathan H.

2003-08-01

56

Recovering Particle Diversity in a Rao-Blackwellized Particle Filter for SLAM After Actively Closing Loops

Recovering Particle Diversity in a Rao-Blackwellized Particle Filter for SLAM After Actively Closing Loops … In this paper, we present an approach to actively closing loops during exploration. It applies a Rao-… to mapping with Rao-Blackwellized particle filters. I. INTRODUCTION: Simultaneous localization and mapping …

Burgard, Wolfram

57

A Vertical Gyro Model Based on Particle Filters

This paper presents an innovative application of particle filtering techniques to the vertical gyro problem, which is a highly nonlinear and non-Gaussian recursive state estimation problem. The classical approach to the recursive state estimation problem of nonlinear and/or non-Gaussian systems is based on the extended Kalman filter (EKF), which is a heuristic method based on the linearization of …

C. Piro; D. Accardo

2007-01-01

58

PARTICLE APPROXIMATIONS TO THE FILTERING PROBLEM IN CONTINUOUS TIME

PARTICLE APPROXIMATIONS TO THE FILTERING PROBLEM IN CONTINUOUS TIME. JIE XIONG. Abstract … particle system approximation, Zakai equation, nonlinear filtering. Research of Xiong is supported partially by NSA. … The observation process is an m-dimensional stochastic process given by Y_t = \int_0^t h(X_s) ds + W_t.

Xiong, Jie

59

A hybrid method for optimization of the adaptive Goldstein filter

NASA Astrophysics Data System (ADS)

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function. Depending on it, considered areas are strongly or weakly filtered. Several variants have been developed to adaptively determine alpha using different indicators such as the coherence, and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.

Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

2014-12-01
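The core Goldstein operation discussed above weights the interferogram spectrum by its magnitude raised to the power alpha, so strong fringe components pass while spread-out noise is suppressed. A minimal single-window sketch (omitting the patchwise processing and spectral smoothing of the full filter; the fringe pattern, noise level, and alpha below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def goldstein_filter(phase, alpha=0.8):
    """Goldstein-style interferogram filter: weight the spectrum by its magnitude**alpha."""
    Z = np.fft.fft2(np.exp(1j * phase))   # spectrum of the complex interferogram
    S = np.abs(Z)
    S = S / S.max()                       # normalize so the dominant fringe keeps weight ~1
    Zf = (S ** alpha) * Z                 # boost dominant fringes, attenuate noise bins
    return np.angle(np.fft.ifft2(Zf))

# Noisy linear fringe pattern: a phase ramp plus Gaussian phase noise, wrapped to [-pi, pi].
x = np.linspace(0, 4 * np.pi, 64)
ramp = np.add.outer(x, x)
noisy = np.angle(np.exp(1j * (ramp + rng.normal(0, 0.8, (64, 64)))))
filt = goldstein_filter(noisy, alpha=0.8)
```

The adaptive variants in the abstract tune alpha per window from coherence-like indicators; here a fixed alpha suffices to show the noise-suppression mechanism.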

60

Forward-looking infrared 3D target tracking via combination of particle filter and SIFT

NASA Astrophysics Data System (ADS)

Aiming at the problem of tracking 3D targets in forward-looking infrared (FLIR) images, this paper proposes a high-accuracy, robust tracking algorithm based on SIFT and particle filtering. The main contribution of this paper is a new method of estimating the affine transformation matrix parameters based on the Monte Carlo methods of the particle filter. First, we extract SIFT features from the infrared image and calculate the initial affine transformation matrix with optimal candidate key points. Then we take the affine transformation parameters as particles and use an SIR (Sequential Importance Resampling) particle filter to estimate the best position. The experiments demonstrate that our algorithm is robust with high accuracy.

Li, Xing; Cao, Zhiguo; Yan, Ruicheng; Li, Tuo

2013-10-01

61

Probabilistic-based approach to optimal filtering

The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and demands that the signal corresponds to minimizing the noise probability distribution for any given value, i.e., on isosurfaces, of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical and variants of the Lorenz system. PMID:11088139

Hannachi

2000-04-01

62

Dynamic and Adjustable Particle Swarm Optimization

Particle Swarm Optimization (PSO) is a stochastic, population-based evolutionary search technique. It has difficulties in controlling the balance between exploration and exploitation. In order to improve the performance of PSO and maintain the diversities of particles, we propose a novel algorithm called Dynamic and Adjustable Particle Swarm Optimization (DAPSO). The distance from each particle to the global best position is

Chen-Yi Liao; Wei-Ping Lee; Xianghan Chen; Cheng-Wen Chiang

2007-01-01

63

Numerical Methods for Globally Optimal Adaptive IIR Filtering

This paper explores the potential for (i) developing globally optimal adaptive IIR filtering algorithms using numerical global optimization methods and (ii) proving absolute convergence of existing algorithms using analytical results available for these global optimization methods. The primary objective of this work is to overcome the performance losses incurred due to convergence to local minima or to suboptimal equation error

Virginia L. Stonick; S. T. Alexander

1990-01-01

64

Design of different types of digital FIR filter is of paramount significance in various Digital Signal Processing (DSP) applications. Different optimization techniques can judiciously be utilized to determine the impulse response coefficients of such a filter. These optimization techniques may include some conventional processes such as Convex or Non-convex optimization methods or some evolutionary algorithms such as Genetic Algorithm (GA),

S. Chattopadhyay; S. K. Sanyal; A. Chandra

2010-01-01

65

Genetic particle filter application to land surface temperature downscaling

NASA Astrophysics Data System (ADS)

Thermal infrared data are widely used for surface flux estimation, giving the possibility to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies to use the coarse-spatial/high-temporal-resolution LST remote-sensing products for better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method was developed to downscale LST based on particle filtering. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated in the framework of the study area landscape (Crau-Camargue, France) and climate for the year 2006. The GPF performance was evaluated against observation errors and temporal sampling. Results show that GPF outperforms prior model estimations. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors less than 2.4 K, which is a very promising result for downscaling LST satellite products.

Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz

2014-03-01

66

Particle filter for long range radar in RUV

NASA Astrophysics Data System (ADS)

In this paper we present an approach for tracking with a high-bandwidth active radar in long range scenarios with 3-D measurements in r-u-v coordinates. The 3-D low-process-noise scenarios considered are much more difficult than the ones we have previously investigated where measurements were in 2-D (i.e., polar coordinates). We show that in these 3-D scenarios the extended Kalman filter and its variants are not desirable as they suffer from either major consistency problems or degraded range accuracy, and most flavors of particle filter suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter is shown to avoid this diversity problem while producing consistent results. The regularization is accomplished using a modified version of the Epanechnikov kernel.
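The diversity-restoring move this abstract credits for avoiding sample impoverishment, resampling followed by kernel jitter, can be sketched as below. This uses the standard Epanechnikov sampler and a textbook rule-of-thumb bandwidth; the paper's modified kernel is not reproduced here, so treat the bandwidth choice as an assumption.

```python
import numpy as np

def sample_epanechnikov(rng, size):
    """Draw samples from the Epanechnikov kernel 3/4 (1 - x^2) on [-1, 1]
    using the classic three-uniform trick (Devroye)."""
    u1, u2, u3 = rng.uniform(-1, 1, (3, size))
    take_u2 = (np.abs(u3) >= np.abs(u1)) & (np.abs(u3) >= np.abs(u2))
    return np.where(take_u2, u2, u3)

def regularized_resample(particles, weights, rng, bandwidth=None):
    """Multinomial resampling followed by Epanechnikov jitter, restoring the
    particle diversity that plain resampling destroys in low-process-noise
    problems (1-D state for simplicity)."""
    n = len(particles)
    resampled = particles[rng.choice(n, n, p=weights)]
    if bandwidth is None:
        # Rule-of-thumb bandwidth; the paper's modified kernel differs
        bandwidth = 1.06 * np.std(resampled) * n ** (-1 / 5)
    return resampled + bandwidth * sample_epanechnikov(rng, n)
```

After this step the particles are all distinct draws from a smoothed version of the posterior, rather than copies of a few high-weight ancestors.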

Romeo, Kevin; Willett, Peter; Bar-Shalom, Yaakov

2014-06-01

67

Optimized Kalman filter versus rigorous method in deformation analysis

NASA Astrophysics Data System (ADS)

Kalman filtering is a multiple-input, multiple-output filter that can optimally estimate the states of a system and is applicable to deformation analysis. The states are all the variables needed to completely describe the behavior of the deformation process as a function of time (such as position, velocity, etc.). The standard Kalman filter estimates the state vector when the measuring process is described by a linear system. In order to process a non-linear system, an optimized variant of the Kalman filter is required. The main purpose of this research is to evaluate the optimized Kalman filter (OKF), a non-robust method, against the iterative weighted similarity transformation (IWST), a rigorous (also called robust) method. To satisfy this objective, a detailed description of executing the optimized Kalman filter using observations of angles and distances directly is first provided. Then, 2-D total station observations comprising distances and angles are used to demonstrate the OKF. For detecting deformation, a point-related test (single point test) is applied to every point as a local test. The findings from the OKF are then compared and evaluated against the results from the IWST method. In general, the outcome of the Kalman filter algorithm is close to the preliminary results from the IWST method; the minimum and maximum differences in computed displacements are 0.2 and 2 millimeters, respectively. The Kalman filter approach is thus recognized as a suitable technique for deformation analysis.
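For reference, a single predict/update cycle of the standard linear Kalman filter that the OKF extends can be sketched as follows; the constant-velocity model in the usage note is illustrative, not the paper's monitoring network.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (linear) Kalman filter."""
    # Predict: propagate the state estimate and its covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For a point monitored in one coordinate with unit time step, the state (position, velocity) with F = [[1, 1], [0, 1]] and H = [[1, 0]] recovers the familiar constant-velocity tracker.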

Aharizad, Nezhla; Setan, Halim; Lim, Mengchan

2012-11-01

68

Optimization of multiplexed holographic gratings for spectral-spatial imaging filters

Multiplexed holographic gratings in PQ-PMMA offer high angular-spectral selectivity and the ability to multiplex multiple gratings with the desired transmittance filtering properties. We present the design and performance of angle-multiplexed holographic spectral-spatial imaging filters.

Barton, Jennifer K.

69

COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS

The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...

71

Seeker Optimization Algorithm for Digital IIR Filter Design

Since the error surface of digital infinite-impulse-response (IIR) filters is generally nonlinear and multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a seeker-optimization-algorithm (SOA)-based evolutionary method is proposed for digital IIR filter design. SOA is based on the concept of simulating the act of human searching in which the search direction is based

Chaohua Dai; Weirong Chen; Yunfang Zhu

2010-01-01

72

A Tutorial on Particle Filtering and Smoothing: Fifteen years later

The objective of this tutorial is to provide a complete, up-to-date survey of this field as of 2008. The "particle" methods described in this tutorial are a broad and popular class of Monte Carlo algorithms.

Del Moral , Pierre

73

Parallel asynchronous particle swarm optimization

SUMMARY The high computational cost of complex engineering optimization problems has motivated the development of parallel optimization algorithms. A recent example is the parallel particle swarm optimization (PSO) algorithm, which is valuable due to its global search capabilities. Unfortunately, because existing parallel implementations are synchronous (PSPSO), they do not make efficient use of computational resources when a load imbalance exists. In this study, we introduce a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency. The performance of the PAPSO algorithm was compared to that of a PSPSO algorithm in homogeneous and heterogeneous computing environments for small- to medium-scale analytical test problems and a medium-scale biomechanical test problem. For all problems, the robustness and convergence rate of PAPSO were comparable to those of PSPSO. However, the parallel performance of PAPSO was significantly better than that of PSPSO for heterogeneous computing environments or heterogeneous computational tasks. For example, PAPSO was 3.5 times faster than was PSPSO for the biomechanical test problem executed on a heterogeneous cluster with 20 processors. Overall, PAPSO exhibits excellent parallel performance when a large number of processors (more than about 15) is utilized and either (1) heterogeneity exists in the computational task or environment, or (2) the computation-to-communication time ratio is relatively small. PMID:17224972
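The core PSO update, with the global best refreshed immediately after each particle evaluation (the property asynchronous implementations exploit so that idle processors never wait on stragglers), can be sketched serially as follows. Parameter values are conventional defaults, not those of the paper.

```python
import numpy as np

def asynchronous_pso(f, dim=2, n_particles=20, iters=100, seed=0,
                     w=0.7, c1=1.5, c2=1.5):
    """PSO in which the global best is updated after each individual particle
    evaluation (the update order asynchronous schemes rely on), rather than
    once per full swarm iteration."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(dim), rng.random(dim)
            # Standard velocity/position update with inertia w
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = x[i] + v[i]
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), fi
                if fi < gbest_f:        # gbest refreshed immediately
                    gbest, gbest_f = x[i].copy(), fi
    return gbest, gbest_f
```

In a true parallel implementation each `f(x[i])` evaluation is dispatched to a worker, and the master applies this same update as results arrive in any order.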

Koh, Byung-Il; George, Alan D.; Haftka, Raphael T.; Fregly, Benjamin J.

2006-01-01

74

Optimization of tunable silicon compatible microring filters

Microring resonators can be used as pass-band filters for wavelength division demultiplexing in electronic-photonic integrated circuits for applications such as analog-to-digital converters (ADCs). For high quality signal ...

Amatya, Reja

2008-01-01

75

NASA Astrophysics Data System (ADS)

A realistic SEM-image-based 3D filter model, accounting for the transition/free molecular flow regime, Brownian diffusion, aerodynamic slip, and particle-fiber and particle-particle interactions, together with a novel Euclidean-distance-map-based methodology for the pressure drop calculation, has been applied to a polyurethane nanofiber filter prepared via electrospinning, in order to better understand the effect of the particle-fiber friction coefficient on filter clogging and basic filter characteristics. The theoretical analysis reveals that an increased fiber-particle friction coefficient causes, firstly, weaker particle penetration into the filter, creation of dense top layers, and a higher pressure drop (surface filtration), in comparison with a lower-friction filter in which deeper particle penetration takes place (depth filtration); secondly, higher filtration efficiency; thirdly, a higher quality factor; and finally, higher sensitivity of the quality factor to the collected particle mass. Moreover, even when the particle-fiber friction coefficients differ, the cake morphology is very similar.

Sambaer, Wannes; Zatloukal, Martin; Kimmer, Dusan

2013-04-01

76

Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674
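As a point of comparison, the traditional Kaiser-Hamming sharpening that the authors improve upon applies the polynomial 3H^2 - 2H^3 to the comb response. A minimal sketch (the filter length M = 16 in the test is arbitrary, and the paper's optimization-based polynomials and compensator are not reproduced):

```python
import numpy as np

def comb_response(w, M):
    """Magnitude response of an M-tap comb (moving-average) filter,
    |sin(Mw/2) / (M sin(w/2))|, evaluated at radian frequencies w."""
    num = np.sin(M * w / 2)
    den = M * np.sin(w / 2)
    # Limit at w = 0 is 1; avoid the 0/0 division there
    return np.abs(np.divide(num, den, out=np.ones_like(w), where=den != 0))

def sharpened(H):
    """Classic Kaiser-Hamming sharpening polynomial 3H^2 - 2H^3, which
    flattens the response near H = 1 (passband) and H = 0 (stopband)."""
    return 3 * H ** 2 - 2 * H ** 3
```

Evaluating both at a passband frequency shows the droop reduction that sharpening provides.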

Troncoso Romero, David Ernesto

2014-01-01

77

An adaptive color-based particle filter

Robust real-time tracking of non-rigid objects is a challenging task. Particle filtering has proven very successful for non-linear and non-Gaussian estimation problems. The article presents the integration of color distributions into particle filtering, which has typically been used in combination with edge-based image features. Color distributions are applied as they are robust to partial occlusion and are rotation and scale invariant

Katja Nummiaro; Esther Koller-meier; Luc J. Van Gool

2003-01-01

78

Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters

Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.

Higby, D.P.

1984-11-01

79

Development of Golden Section Search Driven Particle Swarm Optimization and its Application

The particle swarm optimization (PSO), although it has been widely used in various fields, has a step-size problem, which deteriorates optimization performance. This problem is resolved using the golden section search (GSS) and the steepest descent method. We also design a filter that will improve optimization performance of the proposed algorithm. The effectiveness of the proposed algorithm, including for which

S. Oh; Y. Hori

2006-01-01

80

Sequential Bearings-Only-Tracking Initiation with Particle Filtering Method

The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation. PMID:24453865

Hao, Chengpeng

2013-01-01

81

A NEW FORMULATION OF THE RAO-BLACKWELLIZED PARTICLE FILTER Gustaf Hendeby, Rickard Karlsson

The standard formulation of the Rao-Blackwellized particle filter (RBPF) ... will be exemplified in a simulation study. Index Terms: particle filter, Rao-Blackwellisation, Kalman filter, object

Gustafsson, Fredrik

82

Nonlinear Statistical Signal Processing: A Particle Filtering Approach

An introduction to particle filtering is discussed, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g. Kalman filters) are no longer a restriction to performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed and shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.
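The sequential importance sampling/resampling construction described above can be condensed into a bootstrap particle filter; the toy linear-Gaussian model below is purely illustrative.

```python
import numpy as np

def bootstrap_pf(observations, n_particles=1000, q=1.0, r=1.0, seed=0):
    """Minimal bootstrap particle filter for the toy model
    x_k = 0.9 x_{k-1} + N(0, q),  y_k = x_k + N(0, r)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial ensemble
    estimates = []
    for y in observations:
        # Propagate through the state transition (proposal = prior dynamics)
        particles = 0.9 * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Importance weights from the Gaussian likelihood p(y | x)
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        estimates.append(np.dot(w, particles))      # posterior-mean estimate
        # Multinomial resampling: discard low-weight particles
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

For this linear-Gaussian model the Kalman filter is exact, so the particle filter's estimates should track the true state with an error close to the optimal steady-state value.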

Candy, J

2007-09-19

83

Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

NASA Astrophysics Data System (ADS)

We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

Bruno, Marcelo G. S.; Dias, Stiven S.

2014-12-01

84

Design of optimal correlation filters for hybrid vision systems

NASA Technical Reports Server (NTRS)

Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, coupled filters. This report presents some of these algorithms in detail along with their derivations.
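Of the filter types listed, the phase-only filter admits a particularly compact sketch: keep only the phase of the template spectrum, which sharpens the correlation peak. This illustrative example (array sizes and names are assumptions, not from the report) locates a template in a scene.

```python
import numpy as np

def phase_only_correlation(scene, template):
    """Correlate scene with a phase-only filter built from the template:
    the filter conj(T)/|T| discards the template's magnitude spectrum,
    which yields a sharp, delta-like correlation peak at the match."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)   # zero-pad to scene size
    H = np.conj(T) / np.maximum(np.abs(T), 1e-12)
    return np.real(np.fft.ifft2(S * H))
```

The argmax of the correlation surface gives the template's location in the scene.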

Rajan, Periasamy K.

1990-01-01

85

Microscopical examination of particles on smoked cigarette filters

Cigarette butts collected from crime scenes can play an important role in forensic investigations by providing a DNA link to a victim or suspect. Microscopic particles can frequently be seen on smoked cigarette filters with stereomicroscopy. The authors are not aware of previous published attempts to identify this material. These particles were examined with transmission and scanning electron microscopy and

Charles A. Linch; Joseph A. Prahlow

2008-01-01

86

Particle Swarm Optimization for Integer Programming

The investigation of the performance of the Particle Swarm Optimization (PSO) method on Integer Programming problems is the main theme, carried out on several Integer Programming test problems. Results indicate that PSO handles such problems efficiently

Parsopoulos, Konstantinos

87

Human tremor analysis using particle swarm optimization

The paper presents methods for the analysis of human tremor using particle swarm optimization. Two forms of human tremor are addressed: essential tremor and Parkinson's disease. Particle swarm optimization is used to evolve a neural network that distinguishes between normal subjects and those with tremor. Inputs to the neural network are normalized movement amplitudes obtained from an actigraph system. The

Russell C. Eberhart; Xiaohui Hu

1999-01-01

88

Particle Swarm Optimized Adaptive Dynamic Programming

Particle swarm optimization is used for the training of the action network and critic network of the adaptive dynamic programming approach. The typical structures of the adaptive dynamic programming and particle swarm optimization are adopted for comparison to other learning algorithms such as gradient descent method. Besides simulation on the balancing of a cart pole plant, a more complex plant

Dongbin Zhao; Jianqiang Yi; Derong Liu

2007-01-01

89

Optimal filtering methods to structural damage estimation under ground excitation.

This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

2013-01-01

91

Optimally smooth symmetric quadrature mirror filters for image coding

NASA Astrophysics Data System (ADS)

Symmetric quadrature mirror filters (QMFs) offer several advantages for wavelet-based image coding. Symmetry and odd-length contribute to efficient boundary handling and preservation of edge detail. Symmetric QMFs can be obtained by mildly relaxing the filter bank orthogonality conditions. We describe a computational algorithm for these filter banks which is also symmetric in the sense that the analysis and synthesis operations have identical implementations, up to a delay. The essence of a wavelet transform is its multiresolution decomposition, obtained by iterating the lowpass filter. This allows one to introduce a new design criterion, smoothness (good behavior) of the lowpass filter under iteration. This design constraint can be expressed solely in terms of the lowpass filter tap values (via the eigenvalue decomposition of a certain finite-dimensional matrix). Our innovation is to design near-orthogonal QMFs with linear-phase symmetry which are optimized for smoothness under iteration, not for stopband rejection. The new class of optimally smooth QMF filter banks yields high performance in a practical image compression system.

Heller, Peter N.; Shapiro, Jerome M.; Wells, Raymond O., Jr.

1995-04-01

92

Optimal Recursive Digital Filters for Active Bending Stabilization

NASA Technical Reports Server (NTRS)

In the design of flight control systems for large flexible boosters, it is common practice to utilize active feedback control of the first lateral structural bending mode so as to suppress transients and reduce gust loading. Typically, active stabilization or phase stabilization is achieved by carefully shaping the loop transfer function in the frequency domain via the use of compensating filters combined with the frequency response characteristics of the nozzle/actuator system. In this paper we present a new approach for parameterizing and determining optimal low-order recursive linear digital filters so as to satisfy phase shaping constraints for bending and sloshing dynamics while simultaneously maximizing attenuation in other frequency bands of interest, e.g. near higher frequency parasitic structural modes. By parameterizing the filter directly in the z-plane with certain restrictions, the search space of candidate filter designs that satisfy the constraints is restricted to stable, minimum phase recursive low-pass filters with well-conditioned coefficients. Combined with optimal output feedback blending from multiple rate gyros, the present approach enables rapid and robust parametrization of autopilot bending filters to attain flight control performance objectives. Numerical results are presented that illustrate the application of the present technique to the development of rate gyro filters for an exploration-class multi-engined space launch vehicle.
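The idea of parameterizing a low-order recursive filter directly in the z-plane, with pole and zero radii restricted below one so that the search space contains only stable, minimum-phase designs, can be sketched for a single conjugate pair. The radii and angles in the test are arbitrary illustrations, not the paper's bending-filter design.

```python
import numpy as np

def biquad_from_zplane(r_p, th_p, r_z, th_z):
    """Build a second-order recursive filter from one conjugate pole pair
    (radius r_p, angle th_p) and one conjugate zero pair (r_z, th_z).
    Restricting r_p, r_z < 1 guarantees a stable, minimum-phase design."""
    b = np.poly([r_z * np.exp(1j * th_z), r_z * np.exp(-1j * th_z)]).real
    a = np.poly([r_p * np.exp(1j * th_p), r_p * np.exp(-1j * th_p)]).real
    return b, a

def freq_response(b, a, w):
    """Evaluate B(z)/A(z) on the unit circle (overall gain unnormalized)."""
    z = np.exp(1j * w)
    return np.polyval(b, z) / np.polyval(a, z)
```

Placing the zero pair near the unit circle at a chosen angle carves attenuation at that frequency, which is the kind of targeted shaping an optimizer can then tune against phase constraints.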

Orr, Jeb S.

2013-01-01

93

Localization using omnivision-based manifold particle filters

NASA Astrophysics Data System (ADS)

Developing precise and low-cost spatial localization algorithms is an essential component for autonomous navigation systems. Data collection must be of sufficient detail to distinguish unique locations, yet coarse enough to enable real-time processing. Active proximity sensors such as sonar and rangefinders have been used for interior localization, but sonar sensors are generally coarse and rangefinders are generally expensive. Passive sensors such as video cameras are low cost and feature-rich, but suffer from high dimensions and excessive bandwidth. This paper presents a novel approach to indoor localization using a low cost video camera and spherical mirror. Omnidirectional captured images undergo normalization and unwarping to a canonical representation more suitable for processing. Training images along with indoor maps are fed into a semi-supervised linear extension of graph embedding manifold learning algorithm to learn a low dimensional surface which represents the interior of a building. The manifold surface descriptor is used as a semantic signature for particle filter localization. Test frames are conditioned, mapped to a low dimensional surface, and then localized via an adaptive particle filter algorithm. These particles are temporally filtered for the final localization estimate. The proposed method, termed omnivision-based manifold particle filters, reduces convergence lag and increases overall efficiency.

Wong, Adelia; Yousefhussien, Mohammed; Ptucha, Raymond

2015-01-01

94

TRACKING INTERMITTENT TREMOR FREQUENCY WITH A PARTICLE FILTER Sunghan Kim

The extended Kalman filter (EKF) is a suitable method to track tremor frequencies embedded in spike trains, whose firing rate can be modeled as a sinusoid contaminated with noise. However, when tremor is intermittent, the EKF frequency tracker takes

95

EFFECTS OF PARAMETERS VARIATIONS IN PARTICLE FILTER TRACKING Xavier Desurmonta

The absence of a standard evaluation process has prevented fair comparison between trackers. The demand for video analysis, for marketing and also semantic content retrieval, is increasing [1] [2]. Section 2 introduces top-down tracking and specifically particle filter tracking.

Dupont, Stphane

96

Change Detection for Nonlinear Systems; A Particle Filtering Approach

In this paper we present a change detection method for nonlinear systems. The model contains parameters that change slowly with respect to time, for example the parameters that describe

Del Moral , Pierre

97

A Tutorial on Particle Filtering and Smoothing: Fifteen years later

Particle methods have applications in fields such as computer vision, econometrics, robotics and navigation. The objective of this tutorial is to provide a complete, up-to-date survey of this field. The "particle" methods described in this tutorial are a broad and popular class of Monte Carlo algorithms which have been developed

Johansen, Adam

98

Model Adaptation for Prognostics in a Particle Filtering Framework

NASA Technical Reports Server (NTRS)

One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.

Saha, Bhaskar; Goebel, Kai Frank

2011-01-01
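As a rough illustration of the state-augmentation idea described in this abstract, the sketch below tracks a decaying quantity while jointly estimating its decay parameter with a bootstrap particle filter. The scalar decay model, noise levels, and particle count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative truth: a capacity-like quantity decays as x[k+1] = theta * x[k],
# with theta unknown. Each particle carries the augmented state (x, theta).
theta_true, x_true = 0.98, 1.0
N = 2000

x = np.full(N, 1.0)
theta = rng.uniform(0.9, 1.0, N)   # prior over the unknown model parameter

for _ in range(50):
    x_true *= theta_true
    z = x_true + rng.normal(0.0, 0.01)         # noisy measurement
    theta += rng.normal(0.0, 1e-3, N)          # parameter roughening
    x = theta * x                              # propagate with each particle's theta
    w = np.exp(-0.5 * ((z - x) / 0.01) ** 2)   # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                # multinomial resampling
    x, theta = x[idx], theta[idx]

print(theta.mean())  # the parameter estimate settles near the true 0.98
```

Because the parameter rides along in the state vector, no separate system-identification step is needed; resampling concentrates the particles on parameter values consistent with the measurements.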

99

Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks

Particle filters (PFs) are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as …

Arnaud Doucet; Nando De Freitas; Kevin P. Murphy; Stuart J. Russell

2000-01-01

100

Random set particle filter for bearings-only multitarget tracking

NASA Astrophysics Data System (ADS)

The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the state of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model where each sensor report is assumed to contain at most one measurement. The implemented filter was tested in synthetic bearings-only tracking scenarios containing up to two targets in the presence of false alarms and missed measurements. The estimated target state consisted of 2D position and velocity components. The filter was capable of tracking the targets fairly well despite the missing measurements and the relatively high false alarm rates. In addition, the filter showed robustness against incorrectly specified false alarm rates. The results obtained during the limited tests of the filter show that the random set framework has potential for challenging tracking situations. On the other hand, the computational burden of the described implementation is quite high and increases approximately linearly with the expected number of targets.

Vihola, Matti

2005-05-01

101

A Kalman-Particle Kernel Filter and its Application to Terrain Navigation

A Kalman-Particle Kernel Filter and its Application to Terrain Navigation. Dinh-Tuan Pham … causes undesirable Monte Carlo fluctuations. This new filter is applied to terrain navigation, which … Keywords: Kalman filter, kernel density estimator, regularized particle filter, inertial navigation system.

Del Moral, Pierre

102

Cubature Gaussian Particle Filter for Initial Alignment of Strapdown Inertial Navigation System

The error model of the initial alignment of the marine strapdown inertial navigation system on the swaying base is nonlinear, while the azimuth angle error is large. For this nonlinear model, a new nonlinear filter called the cubature Gaussian particle filter is proposed, which is based on the cubature Kalman filter and the Gaussian particle filter. The cubature …

Weisheng Wu; Chunlei Song; Junhou Wang; Zhenzhen Long

2010-01-01

103

AN ADAPTIVE PROJECTION ALGORITHM FOR MULTIRATE FILTER BANK OPTIMIZATION

AN ADAPTIVE PROJECTION ALGORITHM FOR MULTIRATE FILTER BANK OPTIMIZATION. Dong-Yan Huang and Phillip … to the nonquadratic nature of the cost function to be minimized, and accordingly non-gradient algorithms may offer … to the global minimum of the cost function, while at the same time avoiding potential local minima due to its …

Regalia, Phillip A.

104

Na-Faraday rotation filtering: The optimal point

Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

2014-01-01

105

Na-Faraday rotation filtering: The optimal point

NASA Astrophysics Data System (ADS)

Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing.

Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

2014-10-01

106

Distributed soft-data-constrained multi-model particle filter.

A distributed nonlinear estimation method based on soft-data-constrained multimodel particle filtering and applicable to a number of distributed state estimation problems is proposed. This method needs only local data exchange among neighboring sensor nodes and thus provides enhanced reliability, scalability, and ease of deployment. To make the multimodel particle filtering work in a distributed manner, a Gaussian approximation of the particle cloud obtained at each sensor node and a consensus propagation-based distributed data aggregation scheme are used to dynamically reweight the particles' weights. The proposed method can recover from failure situations and is robust to noise, since it keeps the same population of particles and uses the aggregated global Gaussian to infer constraints. The constraints are enforced by adjusting particles' weights and assigning a higher mass to those closer to the global estimate represented by the nodes in the entire sensor network after each communication step. Each sensor node experiences gradual change; i.e., if noise occurs in the system, the node, its neighbors, and consequently the overall network are less affected than with other approaches, and thus recover faster. The efficiency of the proposed method is verified through extensive simulations for a target tracking system which can process both soft and hard data in sensor networks. PMID:24956539

Seifzadeh, Sepideh; Khaleghi, Bahador; Karray, Fakhri

2015-03-01
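A single reweighting step of the kind this abstract describes might look like the following sketch. The consensus Gaussian is simply given here rather than obtained by consensus propagation, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Local particle cloud at one sensor node (scalar state for simplicity).
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)

# Assumed network-wide estimate (mean, variance) after data aggregation.
consensus_mean, consensus_var = 0.8, 0.25

# Boost particles that agree with the global Gaussian, then renormalize.
lik = np.exp(-0.5 * (particles - consensus_mean) ** 2 / consensus_var)
weights *= lik
weights /= weights.sum()

# The highest-mass particle now lies close to the consensus estimate.
print(particles[weights.argmax()])
```

Note that the particle positions themselves are untouched; only their weights shift toward the global estimate, which is what lets the method keep one population of particles and recover gradually from noise.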

107

NASA Astrophysics Data System (ADS)

In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using a batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS in the PF-based batch filter (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results from the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, the 3D error from external orbit comparison, and the POD repeatability are analyzed for orbit quality assessment. The POD results are externally checked against NASA JPL's orbits, which use totally different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and the 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.

Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul

2014-09-01

108

Measurement of particle sulfate from micro-aethalometer filters

NASA Astrophysics Data System (ADS)

The micro-aethalometer (AE51) was designed for high time resolution black carbon (BC) measurements; the process collects particles on a filter inside the instrument. Here we examine the potential for saving these filters for subsequent sulfate (SO₄²⁻) measurement. For this purpose, a series of lab and field blanks were analyzed to characterize blank levels and variability, and then collocated 24-h aerosol sampling was conducted in Beijing with the AE51 and a dual-channel filterpack sampler that collects fine particles (PM2.5). The AE51 filters and the filters from the filterpacks sampled for 24 h were extracted with ultrapure water and then analyzed by ion chromatography (IC) to determine the integrated SO₄²⁻ concentration. Blank corrections were essential, and the detection limit for 24 h AE51 sampling of SO₄²⁻ was estimated to be 1.4 µg/m³. The SO₄²⁻ measured from the AE51 with blank corrections using batch-average field blank SO₄²⁻ values was found to be in reasonable agreement with the filterpack results (R² > 0.87, slope = 1.02), indicating that it is possible to determine both BC and SO₄²⁻ concentrations using the AE51 in Beijing. This result suggests that future comparison of the relative health impacts of BC and SO₄²⁻ could be possible when the AE51 is used for personal exposure measurement.

Wang, Qingqing; Yang, Fumo; Wei, Lianfang; Zheng, Guangjie; Fan, Zhongjie; Rajagopalan, Sanjay; Brook, Robert D.; Duan, Fengkui; He, Kebin; Sun, Yele; Brook, Jeffrey R.

2014-10-01
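The blank-correction arithmetic underlying estimates like these can be sketched as below; the masses and air volume are made-up numbers, not the study's data.

```python
import statistics

# Hypothetical sulfate masses (µg) extracted from unsampled field-blank
# filters and from one sampled filter, with the 24-h air volume in m³.
field_blanks = [0.21, 0.25, 0.18, 0.24, 0.22]
sample_mass = 1.60
air_volume = 0.9

blank_mean = statistics.mean(field_blanks)
concentration = (sample_mass - blank_mean) / air_volume          # µg/m³
detection_limit = 3 * statistics.stdev(field_blanks) / air_volume  # 3-sigma of blanks

print(round(concentration, 2))  # 1.53
```

Subtracting the batch-average blank before dividing by sampled volume is what makes small filter loadings usable, and the spread of the blanks (not the sample) sets the achievable detection limit.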

109

Degeneracy, frequency response and filtering in IMRT optimization

NASA Astrophysics Data System (ADS)

This paper attempts to provide an answer to some questions that remain either poorly understood, or not well documented in the literature, on basic issues related to intensity modulated radiation therapy (IMRT). The questions examined are: the relationship between degeneracy and frequency response of optimizations, effects of initial beamlet fluence assignment and stopping point, what does filtering of an optimized beamlet map actually do and how could image analysis help to obtain better optimizations? Two target functions are studied, a quadratic cost function and the log likelihood function of the dynamically penalized likelihood (DPL) algorithm. The algorithms used are the conjugate gradient, the stochastic adaptive simulated annealing and the DPL. One simple phantom is used to show the development of the analysis tools used and two clinical cases of medium and large dose matrix size (a meningioma and a prostate) are studied in detail. The conclusions reached are that the high number of iterations that is needed to avoid degeneracy is not warranted in clinical practice, as the quality of the optimizations, as judged by the DVHs and dose distributions obtained, does not improve significantly after a certain point. It is also shown that the optimum initial beamlet fluence assignment for analytical iterative algorithms is a uniform distribution, but such an assignment does not help a stochastic method of optimization. Stopping points for the studied algorithms are discussed and the deterioration of DVH characteristics with filtering is shown to be partially recoverable by the use of space-variant filtering techniques.

Llacer, Jorge; Agazaryan, Nzhde; Solberg, Timothy D.; Promberger, Claus

2004-07-01

110

Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters

Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters. Giorgio Grisetti … 8092 Zurich, Switzerland. Abstract: Recently, Rao-Blackwellized particle filters have been introduced … this number in a Rao-Blackwellized particle filter for learning grid maps. We propose an approach to compute …

Grisetti, Giorgio

111

Rao-Blackwellised Particle Filtering for Fault Diagnosis Nando de Freitas

Rao-Blackwellised Particle Filtering for Fault Diagnosis. Nando de Freitas, Department of Computer … conditionally Gaussian state space models and an efficient Monte Carlo method known as Rao-Blackwellised particle … Outline: model and inference objectives; particle filtering; Rao-Blackwellised particle filtering; experiments.

de Freitas, Nando

112

NASA Astrophysics Data System (ADS)

This dissertation describes the application of nonlinear estimation algorithms to the Global Positioning System (GPS) code tracking problem in noisy environments when the signal-to-noise ratio (SNR) is low. This problem is currently of great interest due to the development of real-time software defined receivers (SDRs). In order to improve GPS signal tracking performance and position accuracy, maximum power must be recovered from the signal correlators. This is accomplished by optimal alignment of the receiver-generated code replicas with the true incoming signal. The optimal recursive Bayesian phase estimator is formulated to solve this problem for a single line-of-sight tracking loop. In low SNR environments, the nonlinear code phase error probability densities become non-Gaussian, asymmetric, and often multi-modal. A closed-form solution to the integrals in the Bayesian estimator is intractable, and numerical approximations to the Bayesian estimator are required. In particular, a grid-based Bayesian estimator and a Sampling Importance Resampling (SIR) particle filter are applied to the line-of-sight tracking problem. Finally, a Particle Filter Vector Delay Lock Loop (PF-VDLL) architecture is described that optimally combines measurements from all satellites simultaneously to compute the three-dimensional navigation solution. The results of the PF-VDLL are shown to improve performance over an Extended Kalman Filter (EKF) implementation. A novel approach for integrating measurements from an Inertial Measurement Unit (IMU) with the PF-VDLL to improve tracking performance is shown.

Fay, Gary Lindsay, II
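The resampling stage of a Sampling Importance Resampling filter like the one used here can be sketched as follows. This is systematic resampling, a common low-variance variant; the weight vector is illustrative.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, N evenly spaced positions.
    Returns indices of surviving particles (lower variance than multinomial)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(1)
w = np.array([0.05, 0.05, 0.80, 0.05, 0.05])
idx = systematic_resample(w, rng)
print(idx)  # index 2 dominates, since it carries 80% of the mass
```

After resampling, all weights are reset to 1/N; heavy particles are duplicated and light ones dropped, which is what keeps the SIR filter from degenerating onto a single weight.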

113

FIR filter optimization for video processing on FPGAs

NASA Astrophysics Data System (ADS)

Two-dimensional finite impulse response (FIR) filters are an important component in many image and video processing systems. The processing of complex video applications in real time requires high computational power, which can be provided using field programmable gate arrays (FPGAs) due to their inherent parallelism. The most resource-intensive components in computing FIR filters are the multiplications of the folding operation. This work proposes two optimization techniques for high-speed implementations of the required multiplications with the least possible number of FPGA components. Both methods use integer linear programming formulations which can be optimally solved by standard solvers. In the first method, a formulation for the pipelined multiple constant multiplication problem is presented. The second method also takes multiplication structures based on look-up tables into account. Due to the low coefficient word size of typically 8 to 12 bits in video processing filters, an optimal solution is found for most of the filters in the benchmark used. A complexity reduction of 8.5% for a Xilinx Virtex 6 FPGA could be achieved compared to state-of-the-art heuristics.

Kumm, Martin; Fanghänel, Diana; Müller, Konrad; Zipf, Peter; Meyer-Baese, Uwe

2013-12-01

114

Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

Boudet, Samuel; Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

2014-01-01

115

Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

2014-01-01

116

Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences

This paper investigates the philosophical and performance differences of particle swarm and evolutionary optimization. The methods of processing employed in each technique are first reviewed, followed by a summary of their philosophical differences. Comparison experiments involving four non-linear functions well studied in the evolutionary optimization literature are used to highlight some performance differences between the techniques.

Peter J. Angeline

1998-01-01

117

Optimization of multiplierless two-dimensional digital filters

NASA Astrophysics Data System (ADS)

Circularly symmetric and diamond-shaped low-pass linear phase FIR filters are designed using coefficients comprising the sum or difference of two signed power-of-two (SPT) terms. A minimax error criterion is adopted in conjunction with an optimization process based on the use of genetic algorithms (GAs). The results presented are compared with those obtained using various other design methods, including simulated annealing, linear programming and simple rounding of an optimum (continuous) minimax solution. The filters designed using GAs exhibit superior performance to those designed using other methods.

Sriranganathan, S.; Bull, David R.; Redmill, David W.

1996-02-01
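For reference, restricting a coefficient to the sum or difference of two signed power-of-two (SPT) terms, as in this design, can be done by brute force for small word lengths; the exponent range below is an illustrative choice, not the paper's setting.

```python
# Best two-term signed power-of-two (SPT) approximation of a coefficient:
# the representation that makes multiplierless (shift-and-add) filters possible.
def best_two_spt(c, max_shift=8):
    terms = [s * 2.0 ** -e for e in range(max_shift + 1) for s in (1, -1)]
    terms.append(0.0)  # allow single-term (or zero) representations too
    return min((a + b for a in terms for b in terms), key=lambda v: abs(v - c))

print(best_two_spt(0.375))  # 0.375  (exact: 2**-2 + 2**-3)
print(best_two_spt(0.6))    # 0.625  (closest reachable: 2**-1 + 2**-3)
```

Each SPT term corresponds to one hardware shift, so a two-term coefficient costs at most one adder per tap; the GA in the paper searches over exactly such quantized coefficient sets rather than quantizing each tap independently.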

118

Hydro thermal scheduling using particle swarm optimization

This paper presents a new particle swarm optimization (PSO) approach for short-term hydro thermal scheduling (HTS) problems. Various possible particle selections have been studied and their effects on the global optima have been discussed. The effectiveness and stochastic nature of the proposed algorithm have been tested on a standard test case, and the results have been compared with earlier …

Chandrasekar Samudi; Gautham P. Das; Piyush C. Ojha; T. S. Sreeni; S. Cherian

2008-01-01

119

Particle swarm optimization: developments, applications and resources

This paper focuses on the engineering and computer science aspects of developments, applications, and resources related to particle swarm optimization. Developments in the particle swarm algorithm since its origin in 1995 are reviewed. Included are brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications, both those already developed and promising future application areas, are reviewed. Finally, resources …

Russell C. Eberhart; Yuhui Shi

2001-01-01
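The inertia-weight formulation reviewed in papers like this one is compact enough to state directly. The coefficients and the sphere test function below are common textbook choices, not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(2)

f = lambda x: (x ** 2).sum(axis=1)   # sphere function, minimum at the origin
n, d = 30, 2
pos = rng.uniform(-5.0, 5.0, (n, d))
vel = np.zeros((n, d))
pbest, pbest_val = pos.copy(), f(pos)

for _ in range(200):
    g = pbest[pbest_val.argmin()]    # global best position so far
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Inertia weight 0.7 damps the old velocity; 1.5 and 1.5 are the usual
    # cognitive (toward pbest) and social (toward g) acceleration coefficients.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    val = f(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

print(pbest_val.min())  # typically tiny: the swarm collapses onto the origin
```

The inertia weight replaces the velocity clamping of the original 1995 algorithm: large values favor exploration, small values favor exploitation, and values below 1 with moderate accelerations keep the swarm convergent.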

120

Using selection to improve particle swarm optimization

This paper describes an evolutionary optimization algorithm that is a hybrid based on the particle swarm algorithm but with the addition of a standard selection mechanism from evolutionary computation. A comparison is performed between the hybrid swarm and the ordinary particle swarm that shows selection provides an advantage for some (but not all) complex functions.

Peter J. Angeline

1998-01-01

121

Microscopical examination of particles on smoked cigarette filters.

Cigarette butts collected from crime scenes can play an important role in forensic investigations by providing a DNA link to a victim or suspect. Microscopic particles can frequently be seen on smoked cigarette filters with stereomicroscopy. The authors are not aware of previously published attempts to identify this material. These particles were examined with transmission and scanning electron microscopy and were found to consist of two types of superficial epithelial tissue, consistent with two areas of the lip surface. The particles were often composed of several layers of non-nucleated and nucleated epithelium, with the former being the most common. It was further determined that both of these cell types are easily transferred from the lip. The results of this study indicate that the most visible source of DNA obtained from cigarette butts and other objects in contact with the lip may be lip epithelial tissue. PMID:19291443

Linch, Charles A; Prahlow, Joseph A

2008-01-01

122

Cultural-based multiobjective particle swarm optimization.

Multiobjective particle swarm optimization (MOPSO) algorithms have been widely used to solve multiobjective optimization problems. Most MOPSOs use fixed momentum and acceleration for all particles throughout the evolutionary process. In this paper, we introduce a cultural framework to adapt the personalized flight parameters of the mutated particles in a MOPSO, namely momentum and personal and global accelerations, for each individual particle based upon various types of knowledge in "belief space," specifically situational, normative, and topographical knowledge. A comprehensive comparison of the proposed algorithm with chosen state-of-the-art MOPSOs on benchmark test functions shows that the movement of the individual particle using the adapted parameters assists the MOPSO to perform efficiently and effectively in exploring solutions close to the true Pareto front while exploiting a local search to attain diverse solutions. PMID:20837447

Daneshyari, Moayed; Yen, Gary G

2011-04-01

123

Selectively-informed particle swarm optimization

Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

Gao, Yang; Du, Wenbo; Yan, Gang

2015-01-01

124

Selectively-informed particle swarm optimization.

Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance in solving many realistic problems. In the original PSO and most of its variants all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which the particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors while a non-hub particle with few connections can only follow a single yet best-performed neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms the PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of different roles that the particles play in optimization. The hub particles guide the optimization process towards correct directions while the non-hub particles maintain the necessary population diversity, resulting in the optimum overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanism of information exchange in natural swarm and flocking behaviors. PMID:25787315

Gao, Yang; Du, Wenbo; Yan, Gang

2015-01-01

125

NASA Astrophysics Data System (ADS)

A general sequential Monte Carlo method, particularly a general particle filter, has recently attracted much attention in prognostics because it is able to estimate, on-line, the posterior probability density functions of the state functions used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform an optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extract bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics and be applied more widely to any other situation in which optimal wavelet filtering is required.

Wang, Dong; Sun, Shilong; Tse, Peter W.

2015-02-01

126

Ridge filter design for a particle therapy line

NASA Astrophysics Data System (ADS)

The beam irradiation system for particle therapy can use a passive or an active beam irradiation method. In the case of active beam irradiation, a ridge filter is appropriate for generating a spread-out Bragg peak (SOBP) over a large scanning area. For this study, a ridge filter was designed as an energy modulation device for a prototype active scanning system at MC-50 in the Korea Institute of Radiological and Medical Sciences (KIRAMS). The ridge filter was designed to create a 10 mm SOBP for a 45-MeV proton beam. To reduce the distal penumbra and the initial dose, the weighting factors for the Bragg peaks were determined by applying an in-house iteration code and the Minuit fit package of ROOT. A single ridge bar shape and its corresponding thickness were obtained through 21 weighting factors. Also, a ridge filter was fabricated from polymethyl methacrylate (PMMA) to cover a large scanning area (300 × 300 mm²). The fabricated ridge filter was tested at the prototype active beamline of MC-50. The SOBP and the incident beam distribution were obtained by using HD-810 GafChromic film placed in a right-triangle geometry against the PMMA block. The depth dose profile for the SOBP can be obtained precisely by using the flat field correction and measuring the 2-dimensional distribution of the incoming beam. After the flat field correction is used, the experimental results show that the SOBP region matches the design requirement well, with 0.62% uniformity.

Kim, Chang Hyeuk; Han, Garam; Lee, Hwa-Ryun; Kim, Hyunyong; Jang, Hong Suk; Kim, Jeong Hwan; Park, Dong Wook; Jang, Sea Duk; Hwang, Won Taek; Kim, Geun-Beom; Yang, Tae-Keun

2014-05-01

127

Particle swarm optimization with composite particles in dynamic environments.

In recent years, there has been a growing interest in the study of particle swarm optimization (PSO) in dynamic environments. This paper presents a new PSO model, called PSO with composite particles (PSO-CP), to address dynamic optimization problems. PSO-CP partitions the swarm into a set of composite particles based on their similarity using a "worst first" principle. Inspired by the composite particle phenomenon in physics, the elementary members in each composite particle interact via a velocity-anisotropic reflection scheme to integrate valuable information for effectively and rapidly finding the promising optima in the search space. Each composite particle maintains the diversity by a scattering operator. In addition, an integral movement strategy is introduced to promote the swarm diversity. Experiments on a typical dynamic test benchmark problem provide a guideline for setting the involved parameters and show that PSO-CP is efficient in comparison with several state-of-the-art PSO algorithms for dynamic optimization problems. PMID:20371407

Liu, Lili; Yang, Shengxiang; Wang, Dingwei

2010-12-01

128

Independent motion detection with a rival penalized adaptive particle filter

NASA Astrophysics Data System (ADS)

Aggregating pixel-based motion detection into regions of interest, each containing the view of a single moving object in the scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or form the basis for action recognition. Further, motion is an essential saliency measure, able to effectively support high-level image analysis. For static cameras, background subtraction methods achieve good results; motion aggregation on freely moving cameras, in contrast, is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When a scene is captured with such a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion-compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is concentrated on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that remains homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.

Becker, Stefan; Hübner, Wolfgang; Arens, Michael

2014-10-01

129

PARTICLE FILTERS FOR SYSTEM IDENTIFICATION OF STATE-SPACE MODELS LINEAR IN EITHER PARAMETERS OR STATES

… parameters θt and states zt. As a very coarse rule of thumb, do not try to use the particle filter for more than five … widens the scope of the particle filter to more complex systems, and secondly decreases the variance in the linear parameters/states for fixed filter complexity. This second property is illustrated on an example …

Schön, Thomas

130

The new approach for infrared target tracking based on the particle filter algorithm

NASA Astrophysics Data System (ADS)

Target tracking against complex backgrounds in infrared image sequences is an active research field. It underpins applications such as video surveillance, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric density estimation, can handle nonlinear and non-Gaussian problems and is therefore widely used. Various forms of the particle filter remain valid when the target is occluded or when tracking must recover from failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, which increases the computational load. In this paper the particle filter is combined with mean shift. The classic mean-shift tracking algorithm is easily trapped in local minima and cannot reach the global optimum against complex backgrounds. We therefore extend the classic mean-shift tracking framework from two perspectives: adaptive multi-cue fusion and combination with the particle filter framework. From the first perspective, we propose an improved mean-shift infrared target tracking algorithm based on multi-cue fusion. Based on an analysis of the target's infrared characteristics, the algorithm first extracts grey-level and edge cues and guides both with the target's motion information, yielding motion-guided grey-level features and motion-guided edge features. A new adaptive fusion mechanism is then proposed to adaptively integrate these two cues into the mean-shift tracking framework. Finally, we design an automatic target-model updating strategy to further improve tracking performance. Experimental results show that this algorithm compensates for the heavy computational load of the particle filter and effectively overcomes mean shift's tendency to fall into local extrema instead of the global maximum. Because the grey-level cues are fused with target motion information, the approach also suppresses background interference, ultimately improving the stability and real-time performance of target tracking.

Sun, Hang; Han, Hong-xia

2011-08-01

131

Enhanced Particle Capture in Slow Sand Filters using a Filter Aid

… a pump was connected at the bottom end of the filter to regulate the filter approach velocity … (valves, filter, pinch valve, peristaltic pump, pressure sensor) …

132

Loss of Fine Particle Ammonium from Denuded Nylon Filters

Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July–August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(−III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.

Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.

2006-08-01

133

Nonlinear EEG Decoding Based on a Particle Filter Model

As the world steps into an aging society, rehabilitation robots play an increasingly important role in both rehabilitation treatment and nursing of patients with neurological diseases. Benefiting from its rich movement-related content, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model has been used as the decoding model for EEG signals in some studies, it is considered unable to reflect the nonlinear components of EEG signals. To overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420
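The bootstrap (SIR) particle filter underlying such nonlinear decoders can be sketched on a toy scalar state-space model. This is the generic algorithm, not the paper's EEG decoder; the AR(1) model and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n=500, a=0.9, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = a*x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2)."""
    parts = rng.normal(0.0, 1.0, n)
    est = []
    for y in obs:
        parts = a * parts + rng.normal(0.0, q, n)     # propagate particles
        w = np.exp(-0.5 * ((y - parts) / r) ** 2)     # likelihood weights
        w /= w.sum()
        est.append(np.sum(w * parts))                 # weighted posterior mean
        parts = parts[rng.choice(n, n, p=w)]          # multinomial resampling
    return np.array(est)

# Simulate a latent AR(1) trajectory and noisy observations of it.
x = np.zeros(100)
for t in range(1, 100):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.0, 0.1)
obs = x + rng.normal(0.0, 0.5, 100)
est = particle_filter(obs)   # filtered estimate tracks x more closely than obs
```

Replacing the scalar state with a limb-kinematics state vector and the Gaussian likelihood with an EEG observation model gives the flavour of the decoder described above.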

Hong, Jun

2014-01-01

134

Symmetric Phase-Only Filtering in Particle-Image Velocimetry

NASA Technical Reports Server (NTRS)

Symmetrical phase-only filtering (SPOF) can be exploited to obtain substantial improvements in the results of data processing in particle-image velocimetry (PIV). In comparison with traditional PIV data processing, SPOF PIV data processing yields narrower and larger amplitude correlation peaks, thereby providing more-accurate velocity estimates. The higher signal-to-noise ratios associated with the higher amplitude correlation peaks afford greater robustness and reliability of processing. SPOF also affords superior performance in the presence of surface flare light and/or background light. SPOF algorithms can readily be incorporated into pre-existing algorithms used to process digitized image data in PIV, without significantly increasing processing times. A summary of PIV and traditional PIV data processing is prerequisite to a meaningful description of SPOF PIV processing. In PIV, a pulsed laser is used to illuminate a substantially planar region of a flowing fluid in which particles are entrained. An electronic camera records digital images of the particles at two instants of time. The components of velocity of the fluid in the illuminated plane can be obtained by determining the displacements of particles between the two illumination pulses. The objective in PIV data processing is to compute the particle displacements from the digital image data. In traditional PIV data processing, to which the present innovation applies, the two images are divided into a grid of subregions and the displacements determined from cross-correlations between the corresponding sub-regions in the first and second images. The cross-correlation process begins with the calculation of the Fourier transforms (or fast Fourier transforms) of the subregion portions of the images. The Fourier transforms from the corresponding subregions are multiplied, and this product is inverse Fourier transformed, yielding the cross-correlation intensity distribution. 
The average displacement of the particles across a subregion results in a displacement of the correlation peak from the center of the correlation plane. The velocity is then computed from the displacement of the correlation peak and the time between the recording of the two images. The process as described thus far is performed for all the subregions. The resulting set of velocities in grid cells amounts to a velocity vector map of the flow field recorded on the image plane. In traditional PIV processing, surface flare light and bright background light give rise to a large, broad correlation peak, at the center of the correlation plane, that can overwhelm the true particle- displacement correlation peak. This has made it necessary to resort to tedious image-masking and background-subtraction procedures to recover the relatively small amplitude particle-displacement correlation peak. SPOF is a variant of phase-only filtering (POF), which, in turn, is a variant of matched spatial filtering (MSF). In MSF, one projects a first image (denoted the input image) onto a second image (denoted the filter) as part of a computation to determine how much and what part of the filter is present in the input image. MSF is equivalent to cross-correlation. In POF, the frequency-domain content of the MSF filter is modified to produce a unitamplitude (phase-only) object. POF is implemented by normalizing the Fourier transform of the filter by its magnitude. The advantage of POFs is that they yield correlation peaks that are sharper and have higher signal-to-noise ratios than those obtained through traditional MSF. In the SPOF, these benefits of POF can be extended to PIV data processing. The SPOF yields even better performance than the POF approach, which is uniquely applicable to PIV type image data. In SPOF as now applied to PIV data processing, a subregion of the first image is treated as the input image and the corresponding subregion of the second image is treated as the filter. 
The Fourier transforms from both the first- and second-image subregions are normalized by the square roots of their respective magnitudes.
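That normalization can be sketched directly with FFTs. A NumPy illustration of SPOF correlation on synthetic particle images; the image size, seeding density, and imposed shift are assumptions for the demo:

```python
import numpy as np

def spof_correlate(sub_a, sub_b, eps=1e-12):
    """Cross-correlate two PIV subregions with symmetric phase-only
    filtering: each spectrum is normalized by the square root of its
    own magnitude before the correlation product is formed."""
    fa, fb = np.fft.fft2(sub_a), np.fft.fft2(sub_b)
    fa = fa / np.sqrt(np.abs(fa) + eps)   # eps guards near-zero bins
    fb = fb / np.sqrt(np.abs(fb) + eps)
    return np.fft.fftshift(np.fft.ifft2(fa * np.conj(fb)).real)

# Synthetic particle images; the second is the first shifted by (3, 5).
rng = np.random.default_rng(1)
img1 = (rng.random((64, 64)) > 0.97).astype(float)
img2 = np.roll(img1, (3, 5), axis=(0, 1))
corr = spof_correlate(img2, img1)
peak = np.unravel_index(np.argmax(corr), corr.shape)
# displacement = peak position minus the centre (32, 32)
```

Setting both normalizations to the full magnitude instead of its square root would recover classic phase-only filtering; dropping them entirely gives the traditional cross-correlation.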

Wernet, Mark P.

2008-01-01

135

Distributed optimal fusion prior filter for systems with multiple packet dropouts

This paper is concerned with the optimal prior filtering problem for linear discrete-time stochastic systems with multiple packet dropouts and correlated noises. Firstly, based on a recent packet dropout model, a new unbiased optimal prior filter is developed in the linear minimum variance sense for a single sensor system. The prior filter is reduced to the standard Kalman one-step predictor

Ma Jing; Sun Shuli

2010-01-01

136

Particle Swarms for Dynamic Optimization Problems

Particle Swarms for Dynamic Optimization Problems. Tim Blackwell (Department of Computing, Goldsmiths College, London, UK), Jürgen Branke (Institute AIFB), and Xiaodong Li … various authors [9, 7, 14, 17, 19, 20, 21, 32, 29, 38]. The overall consequence of …

Li, Xiaodong

137

Particle swarm optimization for task assignment problem

Task assignment is one of the core steps in effectively exploiting the capabilities of distributed or parallel computing systems. The task assignment problem is NP-complete. In this paper, we present a new task assignment algorithm based on the principles of particle swarm optimization (PSO). PSO follows a collaborative population-based search, which models the social behavior

Ayed Salman; Imtiaz Ahmad; Sabah Al-madani

2002-01-01

138

Optimal subband Kalman filter for normal and oesophageal speech enhancement.

This paper presents a single-channel speech enhancement system using subband Kalman filtering, estimating the optimal autoregressive (AR) coefficients and variances for speech and noise with Weighted Linear Prediction (WLP) and a Noise Weighting Function (NWF). The system is applied to normal and oesophageal speech signals. The method is evaluated by the Perceptual Evaluation of Speech Quality (PESQ) score and Signal-to-Noise Ratio (SNR) improvement for normal speech, and by the Harmonic-to-Noise Ratio (HNR) for oesophageal speech (OES). Compared with previous systems, normal speech shows a 30% increase in PESQ score and a 4 dB SNR improvement, and OES shows a 3 dB HNR improvement. PMID:25227070
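The Kalman filtering step at the core of such enhancers can be sketched for one subband modelled as an AR(1) process. The AR coefficient and variances below are fixed toy values, whereas the paper estimates them per subband with weighted linear prediction:

```python
import numpy as np

def kalman_ar1(obs, a=0.95, q=0.01, r=0.1):
    """Scalar Kalman filter for one subband modelled as AR(1):
    s_t = a*s_{t-1} + w_t, observed in additive noise.
    q and r are the process and measurement noise variances."""
    x, p = 0.0, 1.0
    out = []
    for y in obs:
        x, p = a * x, a * a * p + q           # predict
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (y - x), (1 - k) * p   # update with observation y
        out.append(x)
    return np.array(out)

# One noisy subband: AR(1) "speech" plus observation noise.
rng = np.random.default_rng(2)
s = np.zeros(200)
for t in range(1, 200):
    s[t] = 0.95 * s[t - 1] + rng.normal(0.0, 0.1)
noisy = s + rng.normal(0.0, 0.1 ** 0.5, 200)
enhanced = kalman_ar1(noisy)   # lower error w.r.t. s than the noisy input
```

A subband system runs one such filter per filter-bank channel and then resynthesizes the full-band signal.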

Ishaq, Rizwan; García Zapirain, Begoña

2014-01-01

139

Integration of optimized low-pass filters in band-pass filters for out-of-band improvement

We propose an original structure for the design of high-performance filters with simultaneously controlled band-pass and band-reject responses. The band-reject response is controlled by the integration of a low-pass structure, so the spurious resonances of the band-pass filter are rejected up to those of the low-pass filter. In this way, we have to optimize the response of the low-pass structure

Cédric Quendo; C. Person; E. Rius; M. Ney

2001-01-01

140

Multi-strategy coevolving aging particle optimization.

We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithmic logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation in 2010 and 2013. To demonstrate the applicability of the approach, we also use MS-CAP to train a feedforward neural network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the settings considered in this study, tends to outperform state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer. PMID:24344695

Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante

2014-02-01

141

Quantum demolition filtering and optimal control of unstable systems.

A brief account of the quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one. PMID:23091216

Belavkin, V P

2012-11-28

142

Unit Commitment by Adaptive Particle Swarm Optimization

NASA Astrophysics Data System (ADS)

This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem of variable size and load demand, adaptive criteria are applied to the PSO parameters and to the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated in each generation instead of being kept fixed. To keep the method from stagnating, idle particles are reset. The real-valued velocity is digitized (0/1) by a logistic function for binary UC. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
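The velocity digitization mentioned above is the standard binary-PSO construction: squash the real velocity through a logistic function and sample a 0/1 commitment decision. A minimal sketch; the velocity values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def digitize(velocity):
    """Binary-PSO digitization: the logistic function maps a real
    velocity to a probability, from which a 0/1 unit on/off decision
    is sampled."""
    prob = 1.0 / (1.0 + np.exp(-velocity))
    return (rng.random(velocity.shape) < prob).astype(int)

# Illustrative velocities for five units in one scheduling hour.
v = np.array([-4.0, -0.5, 0.0, 0.5, 4.0])
bits = digitize(v)   # strongly positive velocities are almost surely 1
```

In a full binary-PSO UC solver this map is applied to every unit-hour component of each particle's velocity after the usual velocity update.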

Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa

143

Cardiac-phase filtering in intracardiac particle image velocimetry

NASA Astrophysics Data System (ADS)

The ability to accurately measure velocity within the embryonic zebrafish heart, at high spatial and temporal resolution, enables further insight into the effects of hemodynamics on heart development. Unfortunately, currently available techniques are unable to provide the required resolution, both spatial and temporal, for detailed analysis. Advances in imaging hardware are allowing bright field imaging combined with particle image velocimetry to become a viable technique for the broader community at the required spatial and temporal resolutions. While bright field imaging offers the necessary temporal resolution, this approach introduces heart wall artifacts that interfere with accurate velocity measurement. This study presents a technique for cardiac-phase filtering of bright field images to remove the heart wall and improve velocimetry measurements. Velocity measurements were acquired for zebrafish embryos ranging from 3 to 6 days postfertilization. Removal of the heart wall was seen to correct a severe (3-fold) underestimation in velocity measurements obtained without filtering. Additionally, velocimetry measurements were used to quantitatively detect developmental changes in cardiac performance in vivo, investigating both changes in contractile period and maximum velocities present through the ventricular-bulbar valve.

Jamison, R. Aidan; Fouras, Andreas; Bryson-Richardson, Robert J.

2012-03-01

144

Object tracking with particle filter in UAV video

NASA Astrophysics Data System (ADS)

Aerial surveillance is a core function of a UAV, realized via an onboard video camera. During operations, the mission-assigned targets are typically moving objects such as people or vehicles, so object tracking is a key technique for the UAV sensor payload. Two difficulties in UAV object tracking are the dynamic background and the difficulty of predicting the target's motion. To address these problems, this research employs a particle filter. The target is modelled by its characteristics, for instance colour features, and the probability density of the target state is approximated with weighted sample sets, where the state vector contains position, motion vector, and region parameters. The experiments demonstrate the effectiveness and robustness of the proposed method for UAV video tracking.

Yu, Wenshuai; Yin, Xiaodong; Chen, Bing; Xie, Jinhua

2013-10-01

145

Diesel engines offer higher fuel efficiency but produce more exhaust particulate matter. Diesel particulate filters are presently the most efficient means of reducing these emissions. These filters typically trap particles in two basic modes: at the beginning of the exposure cycle particles are captured in the filter pores (deep-bed filtration), and at longer times the particles form a "cake" on which further particles are trapped. Eventually the "cake" is removed by oxidation and the cycle is repeated. We have investigated the properties and behavior of two commonly used filters, silicon carbide (SiC) and cordierite (DuraTrap RC), by exposing them to nearly spherical ammonium sulfate particles. We show that the transition from deep-bed filtration to "cake" filtration can easily be identified by recording the change in pressure across the filters as a function of exposure. We investigated the performance of these filters as a function of flow rate and particle size. The filters trap small and large particles more efficiently than particles that are ~80 to 200 nm in aerodynamic diameter. A comparison between the experimental data and a simulation using an incompressible lattice-Boltzmann model shows very good qualitative agreement, but the model overpredicts the filters' trapping efficiency.

Yang, Juan; Stewart, Marc; Maupin, Gary D.; Herling, Darrell R.; Zelenyuk, Alla

2009-04-15

146

It is challenging to measure the in vivo kinematics of the finger's underlying bones. This paper presents a new method of finger kinematics measurement using a geometric finger model and several markers deliberately attached to the skin surface. Using a multiple-view camera system, the optimal motion parameters of the finger model were estimated with the proposed mixture-prior particle filtering. This prior, consisting of model and marker information, avoids generating improper particles, enabling near real-time performance. The method was validated using a planar fluoroscopy system that worked simultaneously with the photographic system. Ten male subjects with asymptomatic hands were investigated in the experiments. The results showed that the kinematic parameters could be estimated more accurately by the proposed method than by using only markers, with a 20-40% reduction in skin artefacts for finger flexion/extension. Thus, this system can be developed into a reliable kinematics measurement tool with good applicability for hand rehabilitation. PMID:22225500

Chang, Cheung-Wen; Kuo, Li-Chieh; Jou, I-Ming; Su, Fong-Chin; Sun, Yung-Nien

2013-01-01

147

Force field particle filter, combining ultrasound standing waves and laminar flow

A continuous-flow microparticle filter that combines megahertz-frequency ultrasonic standing waves and laminar flow is described. The filter has a 0.25 mm, single half-wavelength, acoustic pathlength at right angles to the flow. Standing-wave radiation pressure on suspended particles drives them towards the centre of the acoustic pathlength. Clarified suspending phase from the region closest to the filter wall

Jeremy J Hawkes; W. Terence Coakley

2001-01-01

148

In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For this work, recently proposed nature-inspired optimization techniques, namely cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO), are explored for the design of the QMF bank. The 2-channel QMF is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, the stopband, and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) with nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its use for optimizing the design problem. Performance of the proposed method is evaluated by passband error (εp), stopband error (εs), transition band error (εt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms: the proposed method gives the best results in terms of peak reconstruction error and transition band error, and is comparable in terms of passband and stopband error. Results show that the proposed method succeeds for both lower- and higher-order 2-channel QMF bank design. A comparative study of the various nature-inspired optimization techniques is also presented, and it singles out CS as the best QMF optimization technique. PMID:25034647

Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

2015-01-01

149

Optimization of a particle optical system in a mutilprocessor environment

In the design of a charged particle optical system, many geometrical and electric parameters have to be optimized to improve the performance characteristics. In every optimization cycle, the electromagnetic field and particle trajectories have to be calculated; the optimization of a charged particle optical system is therefore seriously limited by computer resources. Apart from this, numerical errors of calculation

Lei Wei; Yin Hanchun; Wang Baoping; Tong Linsu

2002-01-01

150

Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN

Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN. Yangmin Li and Xin Chen present a novel design for mobile robot navigation using particle swarm optimization (PSO) and adaptive NN control … Let pd denote …

Li, Yangmin

151

Particle Swarm Optimization for Image Deblurring

NASA Astrophysics Data System (ADS)

Within the framework of this first study, we suggest the use of the Particle Swarm Optimization (PSO) technique in the image restoration field. To our knowledge, no prior work addresses image deblurring (restoration) using PSO. In this paper, we use PSO in two ways: (i) by taking the degraded image as the entire swarm and its pixels as particles; (ii) by taking the degraded image as a particle and generating a population of images to build up a swarm of variable size. The first results were validated on real images degraded by Gaussian blur only, and by Gaussian blur with additive Gaussian noise. We conclude by comparing our results with some classical restoration methods.
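For reference, the standard global-best PSO that both of the paper's variants build on can be sketched as follows. This is the generic optimizer on a toy quadratic, not the image-as-swarm scheme itself; the bounds and coefficients are common defaults assumed here:

```python
import numpy as np

rng = np.random.default_rng(4)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO minimizing f. w is the inertia weight,
    c1/c2 the cognitive and social acceleration coefficients."""
    x = rng.uniform(-5.0, 5.0, (n, dim))      # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, fbest = pso(lambda p: float(np.sum((p - 1.0) ** 2)))
# best approaches [1, 1] on this toy quadratic
```

In the paper's setting, f would be a restoration criterion and a "particle" would be a pixel or a whole candidate image, depending on which of the two schemes is used.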

Toumi, A.; Taleb-Ahmed, A.; Benmahammed, K.; Rechid, N.

2008-06-01

152

Two novel distributed particle filters with Gaussian mixture approximation are proposed to localize and track multiple moving targets in a wireless sensor network. The distributed particle filters run on a set of uncorrelated sensor cliques that are dynamically organized based on moving target trajectories. The two algorithms differ in how the distributed computing is performed. In the first algorithm, partial

Xiaohong Sheng; Yu Hen Hu; Parameswaran Ramanathan

2005-01-01

153

Laboratory of Computational Engineering Rao-Blackwellized Particle Filter for Tracking

Introduction: a Rao-Blackwellized particle filtering based algorithm for tracking an unknown number of targets … sequential Monte Carlo sampling, and the efficiency of the sampling is improved by using Rao-Blackwellization …

Särkkä, Simo

154

Information Gain-based Exploration Using Rao-Blackwellized Particle Filters

… a highly efficient Rao-Blackwellized particle filter to represent the posterior over maps and poses along the path taken by the robot. We furthermore describe how to utilize the properties of the Rao-Blackwellization …

Grisetti, Giorgio

155

Improving Grid-based SLAM with Rao-Blackwellized Particle Filters by Adaptive Proposals

Recently Rao-Blackwellized particle filters have been … techniques to reduce the number of particles in a Rao-Blackwellized particle filter for learning grid maps …

Stachniss, Cyrill

156

For power generation with combined cycles, or for the production of so-called advanced materials by vapour-phase synthesis, particle separation at high temperatures is of crucial importance. In such systems, rigid ceramic barrier filters are either of thermodynamic benefit to the process or essential for producing materials with certain properties. A hot-gas filter test rig has been installed to investigate the influence of different parameters, e.g. temperature, dust properties, filter media, and filtration and regeneration conditions, on particle separation at high temperatures. These tests were conducted both with commonly used filter candles and with filter discs made of the same material. The filter disc is mounted at one side of the test rig, so both filters face the same raw-gas conditions. The filter disc is traversed by the gas in a cross-flow arrangement. This is based on the premise that, to compare the filtration characteristics of candles with those of filter discs or other model filters, the structures of the dust cakes must be equal. This way of investigating the influence of the above parameters on dust separation at high temperatures follows the new standard VDI 3926, which prescribes test procedures for the characterization of filter media at ambient conditions. The paper mainly focuses on the influence of particle properties (e.g. stickiness) on the filtration and regeneration behavior of fly ashes with rigid ceramic filters.

Pilz, T. [Univ. of Karlsruhe (Germany). Inst. fuer Mechanische Verfahrenstechnik und Mechanik

1995-12-31

157

Design of waveguide filters by using genetically optimized frequency selective surfaces

A new optimization procedure suitable for the design of waveguide filters is presented. The filter structure consists of a frequency selective surface (FSS) placed on the transverse plane of a rectangular waveguide, thereby introducing a filtering behavior in the waveguide. Due to the boundary conditions imposed by the metallic waveguide walls, the FSS is effectively infinite in extent, allowing

Agostino Monorchio; Giuliano Manara; Umberto Serra; Giovanni Marola; Enrico Pagana

2005-01-01

158

Optimal Filtering in Mass Transport Modeling From Satellite Gravimetry Data

NASA Astrophysics Data System (ADS)

Monitoring natural mass transport in the Earth's system, which has marked a new era in Earth observation, is largely based on the data collected by the GRACE satellite mission. Unfortunately, this mission is not free from certain limitations, two of which are especially critical. Firstly, its sensitivity is strongly anisotropic: it senses the north-south component of the mass re-distribution gradient much better than the east-west component. Secondly, it suffers from a trade-off between temporal and spatial resolution: a high (e.g., daily) temporal resolution is only possible if the spatial resolution is sacrificed. To make things even worse, the GRACE satellites occasionally enter a phase when their orbit is characterized by a short repeat period, which makes it impossible to reach a high spatial resolution at all. A way to mitigate limitations of GRACE measurements is to design optimal data processing procedures, so that all available information is fully exploited when modeling mass transport. This implies, in particular, that an unconstrained model directly derived from satellite gravimetry data needs to be optimally filtered. In principle, this can be realized with a Wiener filter, which is built on the basis of covariance matrices of noise and signal. In practice, however, a compilation of both matrices (and, therefore, of the filter itself) is not a trivial task. To build the covariance matrix of noise in a mass transport model, it is necessary to start from a realistic model of noise in the level-1B data. Furthermore, routine satellite gravimetry data processing includes, in particular, the subtraction of nuisance signals (for instance, associated with atmosphere and ocean), for which appropriate background models are used. Such models are not error-free, which has to be taken into account when the noise covariance matrix is constructed.
In addition, both signal and noise covariance matrices depend on the type of mass transport processes under investigation. For instance, processes of hydrological origin occur at short time scales, so that the input time series is typically short (1 month or less), which implies a relatively strong noise in the derived model. On the contrary, study of a long-term ice mass depletion requires a long time series of satellite data, which leads to a reduction of noise in the mass transport model. Of course, the spatial patterns (and therefore the signal covariance matrices) of various mass transport processes are also very different. In the presented study, we compare various strategies to build the signal and noise covariance matrices in the context of mass transport modeling. In this way, we demonstrate the benefits of an accurate construction of an optimal filter as outlined above, compared to simplified strategies. Furthermore, we consider both models based on GRACE data alone and combined GRACE/GOCE models. In this way, we shed more light on a potential synergy of the GRACE and GOCE satellite missions. This is important not only for the best possible mass transport modeling on the basis of all available data, but also for the optimal planning of future satellite gravity missions.

Ditmar, P.; Hashemi Farahani, H.; Klees, R.

2011-12-01
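The Wiener filtering step described in this abstract can be sketched numerically. The diagonal covariance matrices below are toy stand-ins (not GRACE-derived), chosen only to show how the filter W = S(S + N)^{-1} damps poorly observed coefficients while preserving well observed ones:

```python
import numpy as np

def wiener_filter(x, S, N):
    """Apply a Wiener filter to a coefficient vector x.

    S : signal covariance matrix, N : noise covariance matrix.
    W = S (S + N)^{-1} minimizes the expected mean-square error
    of the filtered estimate W @ x.
    """
    W = S @ np.linalg.inv(S + N)
    return W @ x

# Toy example: strong signal in the first component, strong noise in the second.
S = np.diag([4.0, 0.25])
N = np.diag([0.25, 4.0])
x = np.array([1.0, 1.0])
y = wiener_filter(x, S, N)
# The well-observed component passes almost unchanged; the noisy one is damped.
```

In practice the matrices are dense and correlated, which is exactly why their careful construction (from level-1B noise models and background-model errors) matters.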

159

Correlation methods are becoming increasingly attractive tools for image recognition and location. This renewed interest in correlation methods is spurred by the availability of high-speed image processors and the emergence of correlation filter designs that can optimize relevant figures of merit. In this paper, a new correlation filter design method is presented that allows one to optimally tradeoff among potentially

B. V. K. Vijaya Kumar; Abhijit Mahalanobis; Alex Takessian

2000-01-01

160

Optimal linear filters are well known as a useful technique for processing extracellular recordings of neural activity. They can be tuned to respond only to a corresponding waveform template, while minimizing the energy of all other templates, and can be used to resolve spikes that are overlapping. The derivation of optimal linear multichannel filters goes back to , but the

Roland Vollgraf; Klaus Obermayer

2006-01-01

161

FIR Filter Design via Spectral Factorization and Convex Optimization

audio, spectrum shaping, ...): upper bounds are convex in h; lower bounds are not. The magnitude filter design problem involves magnitude specs. Classical example: lowpass filter design. Lowpass filter with maximum stopband attenuation. Variables: h ∈ R^n (filter coefficients), δ2 ∈ R (stopband attenuation). Parameters: δ1 ∈ R (logarithmic passband ripple), n (order), ωp (passband frequency), ωs (stopband frequency). Magnitude filter design problems are nonconvex can

Lieven Vandenberghe; Shao-po Wu; Stephen Boyd

1997-01-01
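The paper's actual machinery is spectral factorization plus convex optimization, which needs a convex solver; as a runnable stand-in, the sketch below uses a windowed-sinc design to illustrate the same design quantities (passband edge, stopband attenuation) for a linear-phase lowpass FIR filter. The function name and parameter choices are illustrative, not from the paper:

```python
import numpy as np

def lowpass_fir(numtaps, cutoff):
    """Windowed-sinc linear-phase lowpass FIR design (simple stand-in).
    cutoff is the normalized frequency in (0, 1), where 1 = Nyquist."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = cutoff * np.sinc(cutoff * n)   # ideal lowpass impulse response
    h *= np.hamming(numtaps)           # window to control ripple/attenuation
    return h / h.sum()                 # normalize to unity DC gain

h = lowpass_fir(numtaps=51, cutoff=0.3)

# Evaluate the magnitude response on a dense frequency grid (w = 1 is Nyquist)
w = np.linspace(0.0, 1.0, 1024)
H = np.abs(np.exp(-1j * np.pi * np.outer(w, np.arange(len(h)))) @ h)
```

A Hamming window of this length gives roughly 53 dB of stopband attenuation; the convex-optimization formulation in the paper instead treats the attenuation as an explicit optimization variable.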

162

ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance, and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity when subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals, is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested.
The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK[1]. (authors)

Stillo, Andrew [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States)] [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States); Ricketts, Craig I. [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)] [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)

2013-07-01

163

Optimization of astigmatic particle tracking velocimeters

NASA Astrophysics Data System (ADS)

Astigmatic particle tracking velocimetry (APTV) has been developed in recent years to measure the three-dimensional displacement of tracer particles using a single-camera view. The measurement principle relies on an astigmatic optical system that provides aberrated particle images with a characteristic elliptical shape uniquely related to the corresponding particle depth position. Owing to its precision, the same concept is well established for measuring and controlling the distance between a CD/DVD and the reading head. The optical arrangement of an APTV system essentially consists of a primary stigmatic optics (e.g., a microscope or a camera objective) and an astigmatic optics, typically a cylindrical lens placed in front of the camera sensor. This paper focuses on the uncertainty of APTV in the depth direction. First, an approximated analytical model is derived and experimentally validated. From the model, a set of three non-dimensional parameters that are most significant in the optimization of APTV performance is identified. Finally, the effect of different parameter settings and calibration approaches is studied systematically using numerical Monte Carlo simulations. The results allow for the derivation of general criteria to minimize the overall error in APTV measurements and provide the basis for reliable uncertainty estimation for a wide range of applications.

Rossi, Massimiliano; Kähler, Christian J.

2014-09-01

164

NASA Astrophysics Data System (ADS)

The quantitative performance of a "single half-wavelength" acoustic resonator operated at frequencies around 3 MHz as a continuous-flow microparticle filter has been investigated. Standing-wave acoustic radiation pressure on suspended particles (5-μm latex) drives them towards the center of the half-wavelength separation channel. Clarified suspending phase from the region closest to the filter wall is drawn away through a downstream outlet. The filtration efficiency of the device was established from continuous turbidity measurements at the filter outlet. The frequency dependence of the acoustic energy density in the aqueous particle suspension layer of the filter system was obtained by application of the transfer matrix model [H. Nowotny and E. Benes, J. Acoust. Soc. Am. 82, 513-521 (1987)]. Both the measured clearances and the calculated energy density distributions showed a maximum at the fundamental of the piezoceramic transducer and a second, significantly larger, maximum at another system resonance not coinciding with any of the transducer or empty-chamber resonances. The calculated frequency of this principal energy density maximum was in excellent agreement with the optimal clearance frequency for the four tested channel widths. The high-resolution measurements of filter performance provide, for the first time, direct verification of the matrix model predictions of the frequency dependence of acoustic energy density in the water layer.

Hawkes, Jeremy J.; Coakley, W. Terence; Gröschl, Martin; Benes, Ewald; Armstrong, Sian; Tasker, Paul J.; Nowotny, Helmut

2002-03-01

165

Human Behavior-Based Particle Swarm Optimization

Particle swarm optimization (PSO) has attracted many researchers dealing with various optimization problems, owing to its easy implementation, few tuned parameters, and acceptable performance. However, the algorithm is easily trapped in local optima because of the rapid loss of population diversity. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO, endowed with a random weight drawn from the standard normal distribution; this strategy helps to trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity for the solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed at lower computational cost. PMID:24883357

Xu, Gang; Ding, Gui-yan; Sun, Yu-bo

2014-01-01
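The abstract describes the two HPSO modifications only in words, so the update rule below is one plausible reading, not the authors' exact equation: the c1/c2 coefficients are dropped in favor of uniform random weights, and an extra term weighted by a standard-normal random number moves each particle relative to the global worst position:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

def hpso(f, dim=5, n_particles=20, iters=200, w=0.7, bound=5.0):
    """Sketch of a human-behavior-based PSO step: the usual pbest/gbest
    attraction (with c1, c2 dropped, leaving uniform random weights) plus
    a term toward/away from the global *worst* position, weighted by a
    standard-normal random number (sign flips give extra exploration)."""
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pval)]        # global best
        worst = pbest[np.argmax(pval)]    # global worst
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        r3 = rng.standard_normal((n_particles, 1))  # N(0,1) weight
        v = w*v + r1*(pbest - x) + r2*(g - x) + r3*(worst - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved] = x[improved]
        pval[improved] = fx[improved]
    return pbest[np.argmin(pval)], pval.min()

best, val = hpso(sphere)
```

On the sphere function this sketch converges toward the origin; the paper's benchmarks (rotated, shifted, multimodal functions) are far more demanding.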

166

In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel and a Lagrangian trajectory analysis for particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered: a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake leads to lower mechanical strength, allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold-flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was studied experimentally. A computer model for the experimental vessel was also developed, and the gas flow and particle transport patterns were evaluated.

Goodarz Ahmadi

2002-07-01
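With the Eulerian gas field given, the Lagrangian half of such an Eulerian-Lagrangian computation reduces, per particle, to integrating a drag-plus-gravity ODE. The one-dimensional sketch below (constant gas velocity, Stokes drag with response time tau; all values illustrative) shows the idea:

```python
import numpy as np

def track_particle(u_gas, tau, g=-9.81, dt=1e-3, steps=2000, x0=0.0, v0=0.0):
    """Minimal Lagrangian trajectory integration (sketch): the particle
    velocity relaxes toward the local gas velocity u_gas with response
    time tau (Stokes drag) while gravity acts on it. Vertical 1-D,
    explicit Euler, for brevity."""
    x, v = x0, v0
    for _ in range(steps):
        a = (u_gas - v) / tau + g   # drag + gravity acceleration
        v += a * dt
        x += v * dt
    return x, v

# In still gas the particle settles toward terminal velocity u_gas + g*tau
x, v = track_particle(u_gas=0.0, tau=0.05)
```

A full vessel simulation evaluates u_gas from the Eulerian CFD solution at each particle position and tracks many particles to build deposition patterns.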

167

NASA Astrophysics Data System (ADS)

We assimilate satellite observations of surface chlorophyll into a three-dimensional biological ocean model in order to improve its state estimates, using a particle filter referred to as sequential importance resampling (SIR). Particle filters represent an alternative to other, more commonly used ensemble-based state estimation techniques like the ensemble Kalman filter (EnKF). Unlike the EnKF, particle filters do not require normality assumptions about the model error structure and are thus suitable for highly nonlinear applications. However, their application in oceanographic contexts is typically hampered by the high dimensionality of the model's state space. We apply SIR to a high-dimensional model with a small ensemble size (20) and modify the standard SIR procedure to avoid complications posed by the high dimensionality of the model state. Two extensions to SIR include a simple smoother to deal with outliers in the observations, and state augmentation, which provides the SIR with parameter memory. Our goal is to test the feasibility of biological state estimation with SIR for realistic models. For this purpose we compare the SIR results to a model simulation with optimal parameters with respect to the same set of observations. By running replicates of our main experiments, we assess the robustness of our SIR implementation. We show that SIR is suitable for satellite data assimilation into biological models and that both extensions, the smoother and state augmentation, are required for robust results and improved fit to the observations.

Mattern, Jann Paul; Dowd, Michael; Fennel, Katja

2013-05-01
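A minimal SIR cycle as used above (propagate, weight by the observation likelihood, resample) fits in a few lines; the random-walk model and scalar Gaussian likelihood here are toy stand-ins for the biological ocean model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, y, obs_std, proc_std):
    """One cycle of sequential importance resampling (SIR):
    propagate through the (here: random-walk) model, weight by the
    observation likelihood, then resample proportionally to weight."""
    # 1. propagate through the dynamical model
    particles = particles + rng.normal(0.0, proc_std, size=particles.shape)
    # 2. weight by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    w /= w.sum()
    # 3. multinomial resampling
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Track a constant true state from noisy observations
true_state = 3.0
particles = rng.normal(0.0, 5.0, size=500)
for _ in range(50):
    y = true_state + rng.normal(0.0, 0.5)
    particles = sir_step(particles, y, obs_std=0.5, proc_std=0.1)
estimate = particles.mean()
```

With only 20 particles and a high-dimensional state, the weights in step 2 degenerate quickly, which is exactly why the paper's smoother and state-augmentation extensions are needed.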

168

Particle swarm optimization versus genetic algorithms for phased array synthesis

Particle swarm optimization is a recently invented high-performance optimizer that is very easy to understand and implement. It is similar in some ways to genetic algorithms or evolutionary algorithms, but requires less computational bookkeeping and generally only a few lines of code. In this paper, a particle swarm optimizer is implemented and compared to a genetic algorithm for phased array

Daniel W. Boeringer; Douglas H. Werner

2004-01-01

169

Sparsity Optimization in Design of Multidimensional Filter Networks

sub-filters, which is the simplest network structure of those considered in this paper, ..... iteration, a new basis element is selected from the remaining columns of this matrix following a greedy principle ...... filtering in magnetic resonance angiography.

2014-11-22

170

Penetration of 4.5 nm to 10 μm aerosol particles through fibrous filters

This study presents experimental results on the penetration of aerosol particles with diameters between 4.5 nm and 10 μm through fibrous filters. Three particle size spectrometers were used to measure nanometer, submicron, and micron-sized particles: the TSI 3080 electrostatic classifier equipped with a nano or long differential mobility analyzer, and the TSI 3321 aerodynamic particle sizer. NaCl aerosol particles were generated by using

Sheng-Hsiu Huang; Chun-Wan Chen; Cheng-Ping Chang; Chane-Yu Lai; Chih-Chieh Chen

2007-01-01

171

Alpha CAM filter particle collection pattern study results

During a January 1991 Westinghouse Internal Audit of the WIPP Radiological Air Monitoring Program, an auditor observed that on an Eberline Alpha-6A CAM filter, some particulate was deposited outside the 25 mm diameter area of the filter intended for collection. Since the CAM uses a 25 mm diameter detector, this observation raised concern that the operational efficiency may be

S. G. Clayton; K. B. Steinbruegge; T. D. Merkling

1992-01-01

172

NASA Astrophysics Data System (ADS)

This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than PSO. TLBO is thus preferable where accuracy is more essential than convergence speed.

Singh, R.; Verma, H. K.

2013-12-01
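TLBO's teacher and learner phases, and its lack of algorithm-specific parameters, can be illustrated on a toy objective; the sphere function below stands in for the IIR parameter-identification cost, so all names and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return float(np.sum(x**2))

def tlbo(f, dim=4, pop=20, iters=100, bound=5.0):
    """Sketch of teaching-learning-based optimization (TLBO).
    Teacher phase: each learner moves toward the best solution relative
    to the population mean. Learner phase: each learner moves toward a
    better random peer (or away from a worse one). Only the common
    controls (population size, iterations) are needed."""
    X = rng.uniform(-bound, bound, (pop, dim))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # --- teacher phase ---
        teacher = X[np.argmin(F)]
        mean = X.mean(axis=0)
        TF = rng.integers(1, 3)              # teaching factor in {1, 2}
        for i in range(pop):
            new = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean),
                          -bound, bound)
            fn = f(new)
            if fn < F[i]:                    # greedy acceptance
                X[i], F[i] = new, fn
        # --- learner phase ---
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            direction = X[j] - X[i] if F[j] < F[i] else X[i] - X[j]
            new = np.clip(X[i] + rng.random(dim) * direction, -bound, bound)
            fn = f(new)
            if fn < F[i]:
                X[i], F[i] = new, fn
    return X[np.argmin(F)], F.min()

best, val = tlbo(sphere)
```

For the paper's task, f(x) would be the output error between the candidate IIR filter and the unknown plant over a test signal.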

173

Electrostatic Deposition of Fine Particles for Fabrication of Porous Ceramic Filter

This paper describes a new process based on electrostatic dry deposition (EDD), which provides a novel and simple option for the fabrication of porous ceramic filters. Fine Al2O3 particles (average particle size less than 5 μm) suspended in the gas phase were deposited on a porous ceramic substrate in the form of particle chains under the action of an electrostatic field without

Guo-Feng Li; Zhi-Qiang Wang; Ning-Hui Wang

2009-01-01

174

Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

NASA Technical Reports Server (NTRS)

The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.

Birge, Brian

2013-01-01

175

Distributed multi-sensor particle filter for bearings-only tracking

In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial-distance observability of the target is poor, sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method

Jungen Zhang; Hongbing Ji

2011-01-01

176

Distributed multi-sensor particle filter for bearings-only tracking

In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial-distance observability of the target is poor, sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method

Jungen Zhang; Hongbing Ji

2012-01-01

177

Comprehensive learning particle swarm optimizer for global optimization of multimodal functions

This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence. Experiments were conducted (using codes

Jing J. Liang; A. Kai Qin; Ponnuthurai Nagaratnam Suganthan; S. Baskar

2006-01-01

178

AN EFFICIENT RAO-BLACKWELLIZED PARTICLE FILTER FOR OBJECT TRACKING Elise Arnaud, Etienne Memin

An efficient Rao-Blackwellized particle filter for object tracking. Elise Arnaud, Etienne Memin. ... principle known as Rao-Blackwellisation. Our model also allows combining correlation measurements with dynamics, conditionally on some part of the state. Such a methodology leads to Rao-Blackwellized particle filters

Boyer, Edmond

179

PARTICLE FILTER FOR UNDERWATER TERRAIN NAVIGATION Rickard Karlsson and Fredrik Gustafsson

depends on the map and sensor quality are derived. Second, a more realistic five-state model is proposed than the particle filter estimate after initial transients. Simple rules of thumb for how performance

Gustafsson, Fredrik

180

Enumerating and Disinfecting Bacteria Associated With Particles Released From GAC Filter-Adsorbers

Granular activated carbon (GAC) in filter-adsorbers provides an excellent support surface for the proliferation of microorganisms. Therefore, GAC beds may release particles of carbon with attached bacteria that are protected from disinfection. In this pilot-plant study, particles were collected from the product waters of GAC filter-adsorbers, examined for bacterial colonization, and characterized by energy-dispersive X-ray analysis. Results showed that bacteria

William T. Stringfellow; Kathryn Mallon; Francis A. DiGiano

1993-01-01

181

Implementation of Batch-Based Particle Filters for MultiSensor Tracking

In this paper, we demonstrate fixed-point FPGA implementations of state space systems using Particle Filters, especially multi-target bearing and range tracking systems. These trackers operate either as independent organic trackers or as a joint tracker to estimate a moving target's state in the x-y plane. For the efficiency of the particle filter, we consider factorized posterior approximations based on the

Rajbabu Velmurugan; Volkan Cevher; James H. McClellan

2007-01-01

182

A modified hybrid particle swarm optimization approach for unit commitment

This paper presents a new solution to the thermal unit-commitment (UC) problem based on a modified hybrid particle swarm optimization (MHPSO). Hybrid real and binary PSO is coupled with the proposed heuristic-based constraint-satisfaction strategy that makes the solutions/particles feasible for PSO. The velocity equation of a particle is also modified to prevent particle stagnation. Unit commitment priority is used

Le Thanh Xuan Yen; Deepak Sharma; Dipti Srinivasan; Pindoriya Naran Manji

2011-01-01

183

Filtering via Simulation: Auxiliary Particle Filters. M. K. Pitt and N. Shephard

A recently suggested particle approach to filtering time series. We suggest that the algorithm is ... and Analysis of Econometric and Financial Time Series; the European Union (EU) through their grant "Econometric

Wolfe, Patrick J.

184

Particle size for greatest penetration of HEPA filters - and their true efficiency

The particle size that penetrates a filter most readily is a function of filter media construction, aerosol density, and air velocity. In this paper the published results of several experiments are compared with a modern filtration theory that predicts single-fiber efficiency and the particle size of maximum penetration. For high-efficiency particulate air (HEPA) filters used under design conditions this size is calculated to be 0.21 μm diameter. This is in good agreement with the experimental data. The penetration at 0.21 μm is calculated to be seven times greater than at the 0.3 μm used for testing HEPA filters. Several mechanisms by which filters may have a lower efficiency in use than when tested are discussed.

da Roza, R.A.

1982-12-01

185

Assessing consumption of bioactive micro-particles by filter-feeding Asian carp

Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 μm) and large (150-200 μm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.

Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.

2012-01-01

186

Dual-core optofluidic chip for independent particle detection and tunable spectral filtering.

We present the first integration of fluidically tunable filters with a separate particle detection channel on a single planar, optofluidic chip. Two optically connected, but fluidically isolated liquid-core antiresonant reflecting optical waveguide (ARROW) segments serve as analyte and spectral filter sections, respectively. Ultrasensitive detection of fluorescent nanobeads with high signal-to-noise ratio provided by a fluidically tuned excitation notch filter is demonstrated. In addition, reconfigurable filter response is demonstrated using both core index tuning and bulk liquid tuning. Notch filters with 43 dB rejection ratio and a record 90 nm tuning range are implemented by using different mixtures of ethylene glycol and water in the filter section. Moreover, absorber dyes and liquids with pH-dependent transmission in the filter channel provide additional spectral control independent of the waveguide response. Using both core index and pH control, independent filter tuning at multiple wavelengths is demonstrated for the first time. This extensive on-chip control over spectral filtering as one of the fundamental components of optical particle detection techniques offers significant advantages in terms of compactness, cost, and simplicity, and opens new opportunities for waveguide-based optofluidic analysis systems. PMID:22864667

Ozcelik, Damla; Phillips, Brian S; Parks, Joshua W; Measor, Philip; Gulbransen, David; Hawkins, Aaron R; Schmidt, Holger

2012-10-01

187

Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization

Keywords: fingerprint image generation; evolutionary algorithm; image filters; input pressure. Constructing a fingerprint database is important to evaluate the performance

Cho, Sung-Bae

188

Optimization of the rolling-circle filter for Raman background subtraction.

A procedure is proposed to optimize a high-pass filter enabling one to subtract the broadband background signals inherent in Raman spectra. A spectral approach is used to analyze the characteristics of the filter and the distortions in the processed spectra. Examples of the processing of real spectra are presented. PMID:16608572

Brandt, N N; Brovko, O O; Chikishev, A Y; Paraschuk, O D

2006-03-01

189

Performance Optimization of a Photovoltaic Generator with an Active Power Filter Application

Nomenclature: GPV, photovoltaic generator; h, harmonic range; MPPT, Maximum Power Point Tracking. "Performance Optimization of a Photovoltaic Generator with an Active Power Filter Application," International Journal on Engineering Applications, vol.

Paris-Sud XI, Université de

190

Inertial measurement unit calibration using Full Information Maximum Likelihood Optimal Filtering

The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method ...

Thompson, Gordon A. (Gordon Alexander)

2005-01-01

191

Bulk acoustic wave filters synthesis and optimization for multi-standard communication terminals.

This article presents a design methodology for bulk acoustic wave (BAW) filters. First, an overview of BAW physical principles, BAW filter synthesis, and the modified Butterworth-van Dyke model are addressed. Next, design and optimization methodology is presented and applied to a mixed ladder-lattice BAW bandpass filter for the Universal Mobile Telecommunications System (UMTS) TX-band at 1.95 GHz and to ladder and lattice BAW bandpass filters for the DCS1800 TX-band at 1.75 GHz. In each case, BAW filters are based on AlN resonators. The UMTS filter is designed with conventional molybdenum electrodes, whereas the DCS filter electrodes are made with innovative iridium. PMID:20040426

Giraud, Sylvain; Bila, Stéphane; Chatras, Matthieu; Cros, Dominique; Aubourg, Michel

2010-01-01

192

NASA Astrophysics Data System (ADS)

Coupled hydrogeophysical inversion aims to infer hydrological and petrophysical parameters directly from geophysical measurements. Bayesian inference is used to not only obtain estimates of the most likely model parameters, but to also obtain information on their probability distribution. Sequential Bayesian methods, like particle filters, update the posterior distributions whenever new measurements become available. In this contribution, we analyze a flooding experiment on a full-scale dike model that was observed by in-situ TDR cables and by surface ERT. In the flooding event, the water level was raised to 1 m below the dike crest over the course of two days. In each filtering step, we model the soil water content evolution with HYDRUS, translate water content into a resistivity distribution and apply an ERT forward code to obtain the system response that can be compared to the measurements. We introduce a new particle filter approach that has been improved for parameter estimation. Traditional sampling importance resampling (SIR) particle filters lack systematic exploration of the parameter space. We augment the resampling by additional differential evolutionary Metropolis steps that create new parameter proposals from the particle distribution and replace particles based on a full-path likelihood ratio. The new PArticle filter using DiffeRential Evolutionary Metropolis resampling (PADRE) provides parameter estimates that compare well with laboratory values and previous studies, but are obtained with much higher computational efficiency. Modeling and measurement errors are explicitly treated in the filtering steps and the time evolution of the parameter uncertainty can give insight on the information content of the measurements. We study the influence of the particle number and show that as few as 20-40 particles can be sufficient for the estimation of 12 parameters. 
The increased efficiency of the filtering approach opens avenues to fuse multiple streams of geophysical data.
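A plain bootstrap (SIR) particle filter, the baseline that the differential-evolutionary resampling augments, can be sketched for a scalar toy model. The random-walk model, noise levels, and function name below are illustrative assumptions, not the coupled HYDRUS/ERT setup of the study:

```python
import numpy as np

def sir_particle_filter(observations, n_particles=1000, q=1.0, r=1.0, seed=0):
    """Bootstrap (SIR) particle filter for a scalar random walk observed in Gaussian noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (y - particles) ** 2 / r)                       # weight by likelihood
        w /= w.sum()
        estimates.append(np.dot(w, particles))                            # posterior mean estimate
        # systematic resampling to combat weight degeneracy
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
        particles = particles[idx]
    return np.array(estimates)
```

The resampling step here simply duplicates high-weight particles; PADRE's contribution is to augment exactly this step with differential-evolutionary Metropolis proposals so the parameter space keeps being explored.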

Huisman, J. A.; Rings, J.; Vrugt, J. A.; Vereecken, H.

2010-12-01

193

Optimal Filters for High-Speed Compressive Detection in ...

Feb 28, 2013 ... With filters and exposure times fixed, we use the best linear unbiased estimator .... The nonnegativity of A implies that if there is a number C such that all ?? .... In Figure 2, we compare simulation using OB filters for 200s and...

2013-02-14

194

Optimal design of FIR digital filters with monotone passband response

The application of linear programming to the design of FIR digital filters with constraints on the derivative of the frequency response is described. Numerical considerations in the implementation are discussed and a program is given with examples for the design of filters with optional monotone response in passbands. The method provides the user with an additional degree of flexibility over
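The LP formulation described can be sketched as follows, assuming a Type I linear-phase filter with zero-phase response A(ω) = a₀ + Σ aₖ cos(kω): minimize the maximum approximation error δ on a frequency grid, subject to a non-increasing response across the passband grid. The band edges, grid density, and use of scipy.optimize.linprog are our own illustrative choices, not those of the original program.

```python
import numpy as np
from scipy.optimize import linprog

def monotone_lowpass(M=8, wp=0.4 * np.pi, ws=0.6 * np.pi, grid=80):
    """Minimax lowpass design with a monotone passband, posed as a linear program."""
    wpass = np.linspace(0, wp, grid)
    wstop = np.linspace(ws, np.pi, grid)
    k = np.arange(M + 1)
    Cp = np.cos(np.outer(wpass, k))          # passband cosine matrix
    Cs = np.cos(np.outer(wstop, k))          # stopband cosine matrix
    C = np.vstack([Cp, Cs])
    D = np.concatenate([np.ones(grid), np.zeros(grid)])
    one = np.ones((C.shape[0], 1))
    # |A(w) - D(w)| <= delta on the grid, plus A non-increasing across the passband
    A_ub = np.vstack([np.hstack([C, -one]),
                      np.hstack([-C, -one]),
                      np.hstack([np.diff(Cp, axis=0), np.zeros((grid - 1, 1))])])
    b_ub = np.concatenate([D, -D, np.zeros(grid - 1)])
    c = np.zeros(M + 2)
    c[-1] = 1.0                              # minimize delta
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (M + 1) + [(0, None)])
    return res.x[:-1], res.x[-1]
```

The monotonicity constraint is the discrete analogue of the derivative constraint in the paper: enforcing A(ωᵢ₊₁) − A(ωᵢ) ≤ 0 on the passband grid keeps the passband response free of ripple peaks.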

K. Steiglitz

1979-01-01

195

Optimal LS IIR filter design for music analysis/synthesis

Addresses the design of fixed, low-order infinite impulse response (IIR) filters for modeling the perceptually significant features of the spectra of string instrument bodies. The problem is stated mathematically, and the design methodologies compared here are reviewed. The experimental results are presented. The experimental set-up, data acquisition and data preprocessing are described. The spectra of the IIR filters designed using

V. L. Stonick; Dana Massie

1992-01-01

196

Fibonacci sequence, golden section, Kalman filter and optimal control

A connection between the Kalman filter and the Fibonacci sequence is developed. More precisely it is shown that, for a scalar random walk system in which the two noise sources (process and measurement noise) have equal variance, the Kalman filter's estimate turns out to be a convex linear combination of the a priori estimate and of the measurements with coefficients
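The scalar recursion behind this connection can be sketched directly (variable names are ours; the setting is the paper's random walk with equal process and measurement noise variances):

```python
def kalman_gains(n, p0=1.0, q=1.0, r=1.0):
    """Kalman gain sequence for the scalar random walk x_{k+1} = x_k + w_k, y_k = x_k + v_k."""
    gains, p = [], p0
    for _ in range(n):
        p_pred = p + q                 # prediction variance
        k = p_pred / (p_pred + r)      # Kalman gain
        p = (1.0 - k) * p_pred         # updated variance
        gains.append(k)
    return gains
```

With p0 = q = r = 1 the gain sequence is 2/3, 5/8, 13/21, … — ratios of consecutive Fibonacci numbers — and the steady-state Riccati equation P² − P − 1 = 0 gives a predicted variance equal to the golden ratio φ, so the gain converges to 1/φ = (√5 − 1)/2 ≈ 0.618.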

Alessio Benavoli; Luigi Chisci; Alfonso Farina

2009-01-01

197

A force balance model was developed to predict the effects of particle size, particle size distribution and surface potential on the structure of the filter cake. The model predicts that a stable filter cake is formed at low surface potentials and that the filter cake becomes unstable when the surface potential is larger than 30 mV. The model predicts a minimum

L. Fred Fu; Brian A. Dempsey

1998-01-01

198

Design of optimal finite wordlength FIR digital filters using integer programming techniques

The application of a general-purpose integer-programming computer program to the design of optimal finite wordlength FIR digital filters is described. Examples of two optimal low-pass FIR finite wordlength filters are given and the results are compared with the results obtained by rounding the infinite wordlength coefficients. An analysis of the approach based on the results of more than 50 design

DUSAN M. KODEK

1980-01-01

199

We propose to characterize various coding domains for the joint transform correlator. To that end, optimal trade-off filters have been computed and then optimally constrained to given coding domains with an algorithm we have developed. These coding domains have then been evaluated in relation to the trade-offs they achieve.

Laurent Bigue; Michel Fraces; Pierre Ambs

1996-01-01

200

Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31, 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days after insertion, but more failures will be encountered with this approach.

Rimon, Uri, E-mail: rimonu@sheba.health.gov.il; Bensaid, Paul, E-mail: paulbensaid@hotmail.com; Golan, Gil, E-mail: gilgolan201@gmail.com; Garniek, Alexander, E-mail: garniek@gmail.com; Khaitovich, Boris, E-mail: borislena@012.net.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel); Dotan, Zohar, E-mail: Zohar.Dotan@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Urology (Israel); Konen, Eli, E-mail: Eli.Konen@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel)

2011-06-15

201

Particle Swarm Optimization for the Design of Frequency Selective Surfaces

The particle swarm optimization (PSO) is a stochastic strategy that has recently found application to electromagnetic optimization problems. It is based on the behavior of insect swarms and exploits the solution space by taking into account the experience of the single particle as well as that of the entire swarm. This combined and synergic use of information yields a promising

Simone Genovesi; Raj Mittra; Agostino Monorchio; Giuliano Manara

2006-01-01

202

Effect of open channel filter on particle emissions of modern diesel engine.

Particle emissions of modern diesel engines are of a particular interest because of their negative health effects. The special interest is in nanosized solid particles. The effect of an open channel filter on particle emissions of a modern heavy-duty diesel engine (MAN D2066 LF31, model year 2006) was studied. Here, the authors show that the open channel filter made from metal screen efficiently reduced the number of the smallest particles and, notably, the number and mass concentration of soot particles. The filter used in this study reached 78% particle mass reduction over the European Steady Cycle. Considering the size-segregated number concentration reduction, the collection efficiency was over 95% for particles smaller than 10 nm. The diffusion is the dominant collection mechanism in small particle sizes, thus the collection efficiency decreased as particle size increased, attaining 50% at 100 nm. The overall particle number reduction was 66-99%, and for accumulation-mode particles the number concentration reduction was 62-69%, both depending on the engine load. PMID:19842323

Heikkilä, Juha; Rönkkö, Topi; Lähde, Tero; Lemmetty, Mikko; Arffman, Anssi; Virtanen, Annele; Keskinen, Jorma; Pirjola, Liisa; Rothe, Dieter

2009-10-01

203

Optimal filter in the frequency-time mixed domain to extract moving object

NASA Astrophysics Data System (ADS)

A moving object must sometimes be extracted from an image sequence, for example in remote sensing or robot vision. The process requires highly accurate extraction and a simple realization. In this paper, we propose a design method for an optimal filter in the frequency-time mixed domain. Frequency-selective filters for dynamic images are usually designed in the 3-D frequency domain, but their design is difficult because of the large number of parameters. By using the frequency-time mixed domain (MixeD), which consists of a 2-D frequency domain and a 1-D time domain, filter design becomes easier. However, the desired and noise frequency components of an image tend to concentrate near the origin of the frequency domain, so conventional frequency-selective filters have difficulty distinguishing them. We propose the optimal filter in the MixeD in the sense of least mean square error. First, we apply a 2-D spatial Fourier transform to the dynamic images, and at each point in the 2-D frequency domain, the designed FIR filter is applied to the 1-D time signal. In designing the optimal filter, we use the following information to decide its characteristics: (1) the number of finite frames of input images; (2) the velocity vector of the desired signal; (3) the power spectrum of the noise signal. Signals constructed from this information enter the evaluation function, which determines the filter coefficients. After filtering, a 2-D inverse Fourier transform is applied to obtain the extracted image.
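The MixeD pipeline described above — a 2-D spatial FFT per frame, a 1-D temporal FIR filter at each spatial-frequency point, and the inverse transform — can be sketched generically as follows (function name and shapes are our own; the optimal design of the FIR taps from the three listed pieces of information is not reproduced here):

```python
import numpy as np

def mixed_domain_filter(frames, taps):
    """Filter a (T, H, W) image sequence in the frequency-time mixed domain:
    2-D FFT per frame, then a 1-D FIR filter along time at each spatial
    frequency, then the inverse 2-D FFT."""
    F = np.fft.fft2(frames, axes=(1, 2))                    # per-frame 2-D frequency domain
    Ff = np.apply_along_axis(lambda s: np.convolve(s, taps, mode='same'), 0, F)
    return np.fft.ifft2(Ff, axes=(1, 2)).real
```

Because the temporal filtering acts independently on each 2-D frequency bin, the 3-D design problem collapses to many small 1-D FIR designs, which is the simplification the MixeD approach exploits.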

Shinmura, Hideyuki; Hiraoka, Kazuhiro; Hamada, Nozomu

2000-12-01

204

An explicit variance reduction expression for the Rao-Blackwellised particle filter

When part of the model has tractable analytic structure, it is possible to exploit this structure in order to obtain more accurate estimates; this has become known as Rao-Blackwellisation. However, since the Rao-Blackwellised particle filter is computationally more expensive than the standard PF per particle, it is not always beneficial to resort to Rao-Blackwellisation. For the same
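The variance-reduction principle behind Rao-Blackwellisation can be illustrated with a toy Monte Carlo estimate (our own construction, not the paper's state-space derivation): to estimate E[y] with x ~ N(0,1) and y | x ~ N(x,1), averaging the conditional mean E[y|x] = x instead of raw samples of y removes the conditional sampling noise and roughly halves the estimator variance.

```python
import numpy as np

def estimators(n, rng):
    """One crude and one Rao-Blackwellised estimate of E[y], where
    x ~ N(0,1) and y | x ~ N(x,1), so E[y] = 0 and E[y|x] = x."""
    x = rng.normal(0.0, 1.0, n)
    y = x + rng.normal(0.0, 1.0, n)
    return y.mean(), x.mean()

rng = np.random.default_rng(0)
pairs = [estimators(100, rng) for _ in range(500)]
crude_var = np.var([p[0] for p in pairs])   # approx 2/n: both noise sources contribute
rb_var = np.var([p[1] for p in pairs])      # approx 1/n: marginalization halves the variance
```

In a Rao-Blackwellised particle filter the same trade-off appears per particle: the analytic marginalization lowers variance but adds per-particle cost, which is why the paper's explicit expression for the variance reduction is useful.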

Schön, Thomas

205

Target Model Estimation using Particle Filters for Visual Servoing

In this paper, we present a novel method for model estimation for visual servoing. This method employs a particle filter algorithm to estimate the depth of the image features online. A Gaussian probabilistic model is employed to model the object points in the current camera frame. A set of 3D samples drawn from the model is projected

A. H. Abdul Hafez; C. V. Jawahar

2006-01-01

206

Tag SNP selection using particle swarm optimization.

Single nucleotide polymorphisms (SNPs) are the most abundant form of genetic variations amongst species. With the genome-wide SNP discovery, many genome-wide association studies are likely to identify multiple genetic variants that are associated with complex diseases. However, genotyping all existing SNPs for a large number of samples is still challenging even though SNP arrays have been developed to facilitate the task. Therefore, it is essential to select only informative SNPs representing the original SNP distributions in the genome (tag SNP selection) for genome-wide association studies. These SNPs are usually chosen from haplotypes and called haplotype tag SNPs (htSNPs). Accordingly, the scale and cost of genotyping are expected to be largely reduced. We introduce binary particle swarm optimization (BPSO) with local search capability to improve the prediction accuracy of STAMPA. The proposed method does not rely on block partitioning of the genomic region, and consistently identified tag SNPs with higher prediction accuracy than either STAMPA or SVM/STSA. We compared the prediction accuracy and time complexity of BPSO to STAMPA and an SVM-based method (SVM/STSA) using publicly available data sets. Compared with STAMPA and SVM/STSA, BPSO effectively improved prediction accuracy for both smaller and larger scale data sets. These results demonstrate that the BPSO method selects tag SNPs with higher accuracy regardless of the scale of the data set. PMID:20039435
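A minimal binary PSO of the Kennedy-Eberhart type (sigmoid-of-velocity bit sampling) can be sketched as follows. The onemax fitness stands in for the tag-SNP prediction-accuracy objective, and all parameter values and names are illustrative assumptions, not those of the paper's local-search-augmented BPSO:

```python
import numpy as np

def binary_pso(fitness, dim, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Binary PSO: bits are resampled with probability sigmoid(velocity)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, dim))
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))
    pbest = x.copy()
    pfit = np.array([fitness(p) for p in x])
    gbest, gfit = pbest[pfit.argmax()].copy(), pfit.max()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -4.0, 4.0)                      # keep the sigmoid responsive
        x = (rng.random((n_particles, dim)) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pfit
        pbest[better], pfit[better] = x[better], f[better]
        if pfit.max() > gfit:
            gfit = pfit.max()
            gbest = pbest[pfit.argmax()].copy()
    return gbest, gfit
```

In a tag-SNP setting, each bit would mark one SNP as a tag and the fitness would be the prediction accuracy of the remaining SNPs; the paper's local search step is omitted here.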

Chuang, Li-Yeh; Yang, Cheng-San; Ho, Chang-Hsuan; Yang, Cheng-Hong

2010-01-01

207

Binary particle swarm optimization for operon prediction.

An operon is a fundamental unit of transcription and contains specific functional genes for the construction and regulation of networks at the entire genome level. The correct prediction of operons is vital for understanding gene regulations and functions in newly sequenced genomes. As experimental methods for operon detection tend to be nontrivial and time consuming, various methods for operon prediction have been proposed in the literature. In this study, a binary particle swarm optimization is used for operon prediction in bacterial genomes. The intergenic distance, participation in the same metabolic pathway, the cluster of orthologous groups, the gene length ratio and the operon length are used to design a fitness function. We trained the proper values on the Escherichia coli genome, and used the above five properties to implement feature selection. Finally, our study used the intergenic distance, metabolic pathway and the gene length ratio property to predict operons. Experimental results show that the prediction accuracy of this method reached 92.1%, 93.3% and 95.9% on the Bacillus subtilis genome, the Pseudomonas aeruginosa PA01 genome and the Staphylococcus aureus genome, respectively. This method has enabled us to predict operons with high accuracy for these three genomes, for which only limited data on the properties of the operon structure exists. PMID:20385582

Chuang, Li-Yeh; Tsai, Jui-Hung; Yang, Cheng-Hong

2010-07-01

208

Design of Optimal Decimation and Interpolation Filters (APCCAS 2006)

A method is presented for the design of optimal decimation and interpolation filters that can be utilized in a TEMG-type system with a decimation filter and an interpolation filter. Simulation results are presented to demonstrate

Lu, Wu-Sheng

209

NASA Astrophysics Data System (ADS)

A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.
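The Fourier-domain matched filtering that the new technique improves upon can be sketched as follows (a generic matched filter with our own normalization convention; it deliberately omits the coverage-gradient treatment that distinguishes the optimal filter described above):

```python
import numpy as np

def matched_filter(map2d, kernel, noise_psd):
    """Fourier-domain matched filter for point sources: weight each mode by the
    conjugate source template over the noise power, normalized so that a source
    of amplitude A yields a filtered peak of A."""
    K = np.fft.fft2(np.fft.ifftshift(kernel))        # template, re-centred at the origin
    F = np.conj(K) / noise_psd
    F /= np.sum(np.abs(K) ** 2 / noise_psd) / map2d.size   # unit response to the template
    return np.fft.ifft2(np.fft.fft2(map2d) * F).real
```

Casting the whole operation as FFTs is what keeps the cost low; the paper's point is that extending this to spatially varying (coverage-dependent) noise adds little extra computation.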

Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

2013-07-01

210

Culminating-point filter construction for particle points is distinguished from torus construction for wave functions in the tangent objects of their neighborhoods. Both constructions are not united by a general manifold diffeomorphism, but are united by a map of a hidden conformal $S^{1}\times S^{3}$ charge with harmonic (Maxwell) potentials into a physical space formed by culminating points, tangent objects, and Feynman connections. The particles are obtained from three classes of eigensolutions of the homogeneous potential equations on $S^{1}\times S^{3}$. The map of the $u(2)$ invariant vector fields into the Dirac phase factors of the connections yields the electro-weak Lagrangian with explicit mass operators for the massive leptons. The spectrum of massive particles is restricted by the small, manageable number of eigensolution classes and an instability of the model for higher mass values. This instability also defines the huge numbers of filter elements needed for the culminating points. Weinberg angle, current coupling constant, and lepton masses are calculated or estimated from the renormalization of filter properties. Consequences for particle astrophysics follow, on the one hand, from the restriction of particle classes and, on the other hand, from the suggestion of new particles from the three classes, e.g. of dark matter, of a confinon for the hadrons, and of a prebaryon. Definitely excluded are e.g. SUSY constructions, Higgs particles, and a quark-gluon plasma: three-piece phenomena from the confinons are always present.

E. Donth

2009-12-11

211

NASA Astrophysics Data System (ADS)

In this paper, optimization of diesel engine control parameters using a modified multi-objective particle swarm optimization (MOPSO) method is considered. This problem is formulized as a multi-objective optimization problem involving three optimization objectives: brake specific fuel consumption (BSFC), exhaust gas emission, and soot. A modified MOPSO is proposed with integration of particle swarm optimization (PSO) and a crossover approach. Several benchmark functions are tested, and results reveal that the modified MOPSO is more efficient than the typical MOPSO. Engine control parameter optimization with an extended PSO and the modified MOPSO is simulated, respectively. It proved the potential of the modified MOPSO for the engine control parameter optimization problem.

Wu, Dongmei; Ogawa, Masatoshi; Suzuki, Yasumasa; Ogai, Harutoshi; Kusaka, Jin

212

NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS

Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2 µm MMAD)...

213

The aim of this paper is to demonstrate the potential power of large-scale particle filtering for the parameter estimation of in silico biological pathways where time course measurements of biochemical reactions are observable. The method of particle filtering has been a popular technique in the field of statistical science, which approximates posterior distributions of model parameters of a dynamic system by using sequentially generated Monte Carlo samples. In order to apply particle filtering to system identification of biological pathways, it is often necessary to explore posterior distributions which are defined over an exceedingly high-dimensional parameter space. It is then essential to use a fairly large number of Monte Carlo samples to obtain an approximation with a high degree of accuracy. In this paper, we address some implementation issues on large-scale particle filtering, and then indicate the importance of large-scale computing for parameter learning of in silico biological pathways. We have tested the ability of the particle filtering with 10⁸ Monte Carlo samples on the transcription circuit of the circadian clock that contains 45 unknown kinetic parameters. The proposed approach could clearly reveal the shape of the posterior distributions over the 45-dimensional parameter space. PMID:19209704

Nakamura, Kazuyuki; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru; Higuchi, Tomoyuki

2009-01-01

214

GSVD-based optimal filtering for single and multimicrophone speech enhancement

A generalized singular value decomposition (GSVD) based algorithm is proposed for enhancing multimicrophone speech signals degraded by additive colored noise. This GSVD-based multimicrophone algorithm can be considered to be an extension of the single-microphone signal subspace algorithms for enhancing noisy speech signals and amounts to a specific optimal filtering problem when the desired response signal cannot be observed. The optimal

Simon Doclo; Marc Moonen

2002-01-01

215

This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

Sun, W Y [Lawrence Berkeley Lab., CA (United States)

1993-04-01

216

Filter performance of n99 and n95 facepiece respirators against viruses and ultrafine particles.

The performance of three filtering facepiece respirators (two models of N99 and one N95) challenged with an inert aerosol (NaCl) and three virus aerosols (enterobacteriophages MS2 and T4 and Bacillus subtilis phage)-all with significant ultrafine components-was examined using a manikin-based protocol with respirators sealed on manikins. Three inhalation flow rates, 30, 85, and 150 l min⁻¹, were tested. The filter penetration and the quality factor were determined. Between-respirator and within-respirator comparisons of penetration values were performed. At the most penetrating particle size (MPPS), >3% of MS2 virions penetrated through filters of both N99 models at an inhalation flow rate of 85 l min⁻¹. Inhalation airflow had a significant effect upon particle penetration through the tested respirator filters. The filter quality factor was found suitable for making relative performance comparisons. The MPPS for challenge aerosols was <0.1 µm in electrical mobility diameter for all tested respirators. Mean particle penetration (by count) was significantly increased when the size fraction of <0.1 µm was included as compared to particles >0.1 µm. The filtration performance of the N95 respirator approached that of the two models of N99 over the range of particle sizes tested (approximately 0.02 to 0.5 µm). Filter penetration of the tested biological aerosols did not exceed that of inert NaCl aerosol. The results suggest that inert NaCl aerosols may generally be appropriate for modeling filter penetration of similarly sized virions. PMID:18477653

Eninger, Robert M; Honda, Takeshi; Adhikari, Atin; Heinonen-Tanski, Helvi; Reponen, Tiina; Grinshpun, Sergey A

2008-07-01

217

Removal of Particles and Acid Gases (SO2 or HCl) with a Ceramic Filter by Addition of Dry Sorbents

The present investigation intends to add to the fundamental process design know-how for dry flue gas cleaning, especially with respect to process flexibility, in cases where variations in the type of fuel and thus in concentration of contaminants in the flue gas require optimization of operating conditions. In particular, temperature effects of the physical and chemical processes occurring simultaneously in the gas-particle dispersion and in the filter cake/filter medium are investigated in order to improve the predictive capabilities for identifying optimum operating conditions. Sodium bicarbonate (NaHCO₃) and calcium hydroxide (Ca(OH)₂) are known as efficient sorbents for neutralizing acid flue gas components such as HCl, HF, and SO₂. According to their physical properties (e.g. porosity, pore size) and chemical behavior (e.g. thermal decomposition, reactivity for gas-solid reactions), optimum conditions for their application vary widely. The results presented concentrate on the development of quantitative data for filtration stability and overall removal efficiency as affected by operating temperature. Experiments were performed in a small pilot unit with a ceramic filter disk of the type Dia-Schumalith 10-20 (Fig. 1, described in more detail in Hemmer 2002 and Hemmer et al. 1999), using model flue gases containing SO₂ and HCl, flyash from wood bark combustion, and NaHCO₃ as well as Ca(OH)₂ as sorbent material (particle size d₅₀/d₈₄: 35/192 µm, and 3.5/16, respectively). The pilot unit consists of an entrained flow reactor (gas duct) representing the raw gas volume of a filter house and the filter disk with a filter cake, operating continuously, simulating filter cake build-up and cleaning of the filter medium by jet pulse.
Temperatures varied from 200 to 600 °C, sorbent stoichiometric ratios from zero to 2, inlet concentrations were on the order of 500 to 700 mg/m³, water vapor contents ranged from zero to 20 vol%. The experimental program with NaHCO₃ is listed in Table 1. In addition, model calculations were carried out based on own and published experimental results that estimate residence time and temperature effects on removal efficiencies.

Hemmer, G.; Kasper, G.; Wang, J.; Schaub, G.

2002-09-20

218

Fiber Bragg grating filter using evaporated induced self assembly of silica nano particles

NASA Astrophysics Data System (ADS)

In the present work we study fiber filters produced by evaporation of silica particles onto an MM-fiber core. A band filter was designed and theoretically verified using a 2D Comsol simulation model of a 3D problem, and calculated in the frequency domain with respect to refractive index. The fiber filters were fabricated by stripping and chemically etching the middle part of an MM-fiber until the core was exposed. A monolayer of silica nanoparticles was evaporated onto the core using an Evaporation Induced Self-Assembly (EISA) method. The experimental results indicated a broader bandwidth than the simulations, which can be explained by mismatch in the particle size distributions, uneven particle packing, and effects from multiple mode angles. Thus, there are several closely spaced Bragg wavelengths that build up the broader bandwidth. The experiments show that, by narrowing the particle size distribution and better controlling the particle packing, the filter effectiveness can be greatly improved.

Hammarling, Krister; Zhang, Renyung; Manuilskiy, Anatoliy; Nilsson, Hans-Erik

2014-03-01

219

An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (∼32 cm⁻¹) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm⁻¹. Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

Liu, Jui-Nung; Schulmerich, Matthew V; Bhargava, Rohit; Cunningham, Brian T

2011-11-21

221

Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

Obstacle detection and tracking is a key function for UAS sense and avoid applications. Obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of a collision threat. The most important parameter for the assessment of collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between the own aircraft and the intruder given their current positions and speeds. Since established methodologies can lose accuracy due to nonlinearities, advanced filtering methodologies such as particle filters can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single-sensor framework. The analysis shows accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154

Moccia, Antonio

2014-01-01
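The Distance at Closest Point of Approach used above as the collision-risk metric has a simple closed form under a constant-velocity assumption. The sketch below is illustrative only (2-D, constant velocities, names of my choosing) and is not the paper's tracking algorithm:

```python
import math

def dcpa(own_pos, own_vel, intr_pos, intr_vel):
    """Distance at Closest Point of Approach for two constant-velocity
    vehicles in 2-D. Returns (dcpa, time_to_cpa), with the time clamped
    to the future (t >= 0)."""
    # Relative position and velocity of the intruder w.r.t. own aircraft.
    rx, ry = intr_pos[0] - own_pos[0], intr_pos[1] - own_pos[1]
    vx, vy = intr_vel[0] - own_vel[0], intr_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:  # no relative motion: the separation never changes
        return math.hypot(rx, ry), 0.0
    # Time minimizing |r + v*t|, clamped so past approaches are ignored.
    t = max(0.0, -(rx * vx + ry * vy) / v2)
    return math.hypot(rx + vx * t, ry + vy * t), t
```

For example, an intruder 1000 m ahead and 100 m abeam, closing at 200 m/s relative speed, reaches its closest point (100 m) after 5 s.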

222

Chaotic Particle Swarm Optimization with Mutation for Classification

In this paper, a chaotic particle swarm optimization with mutation-based classifier particle swarm optimization is proposed to classify patterns of different classes in the feature space. The introduced mutation operators and chaotic sequences allow us to overcome the problem of premature convergence to a local minimum associated with particle swarm optimization algorithms. That is, the mutation operator sharpens the convergence and tunes the best possible solution. Furthermore, to remove irrelevant data and reduce the dimensionality of medical datasets, a feature selection approach using a binary version of the proposed particle swarm optimization is introduced. In order to demonstrate the effectiveness of the proposed classifier, mutation-based classifier particle swarm optimization, it is evaluated on three classification datasets, namely Wisconsin diagnostic breast cancer, Wisconsin breast cancer, and heart-statlog, with different feature vector dimensions. The proposed algorithm is compared with different classifier algorithms including k-nearest neighbor, as a conventional classifier, and particle swarm classifier, genetic algorithm, and imperialist competitive algorithm classifier, as more sophisticated ones. The performance of each classifier was evaluated by calculating the accuracy, sensitivity, specificity, and Matthews correlation coefficient. The experimental results show that the mutation-based classifier particle swarm optimization unequivocally performs better than all the compared algorithms. PMID:25709937

Assarzadeh, Zahra; Naghsh-Nilchi, Ahmad Reza

2015-01-01

223

DETECTING CANDIDATE COSMIC BUBBLE COLLISIONS WITH OPTIMAL FILTERS

The standard ΛCDM concordance cosmological model is now well supported. [Figure 1: panels (a) and (b) show the radial profile and spherical plot, respectively, of a bubble collision signature; panel (c) shows the corresponding matched filter.]

McEwen, Jason

224

NASA Astrophysics Data System (ADS)

Facial recognition is a difficult task due to variations in pose and facial expressions, as well as presence of noise and clutter in captured face images. In this work, we address facial recognition by means of composite correlation filters designed with multi-objective combinatorial optimization. Given a large set of available face images having variations in pose, gesticulations, and global illumination, a proposed algorithm synthesizes composite correlation filters by optimization of several performance criteria. The resultant filters are able to reliably detect and correctly classify face images of different subjects even when they are corrupted with additive noise and nonhomogeneous illumination. Computer simulation results obtained with the proposed approach are presented and discussed in terms of efficiency in face detection and reliability of facial classification. These results are also compared with those obtained with existing composite filters.

Cuevas, Andres; Diaz-Ramirez, Victor H.; Kober, Vitaly; Trujillo, Leonardo

2014-09-01

225

Particle Count Statistics Applied to the Penetration of a Filter Challenged with Nanoparticles

Statistical confidence in a single measure of filter penetration (P) is dependent on the low number of particle counts made downstream of the filter. This paper discusses methods for determining an upper confidence limit (UCL) for a single measure of penetration. The magnitude of the UCL was then compared to the P value, with UCL ≤ 2P as a penetration acceptance criterion (PAC). This statistical method was applied to penetration trials involving an N95 filtering facepiece respirator challenged with sodium chloride and four engineered nanoparticles: titanium dioxide, iron oxide, silicon dioxide and single-walled carbon nanotubes. Ten trials were performed for each particle type with the aim of determining the most penetrating particle size (MPPS) and the maximum penetration, Pmax. The PAC was applied to the size channel containing the MPPS. With those P values that met the PAC for a given set of trials, average Pmax and MPPS values were computed together with corresponding standard deviations. Because the size distribution of the silicon dioxide aerosol was shifted towards larger particles relative to the MPPS, none of the ten trials satisfied the PAC for that aerosol. The remaining four particle types resulted in at least 4 trials meeting the criterion. MPPS values ranged from 35 to 53 nm with average Pmax values varying from 4.0% for titanium dioxide to 7.0% for iron oxide. The use of the penetration acceptance criterion is suggested for determining the reliability of penetration measurements obtained to determine filter Pmax and MPPS. PMID:24678138

O'Shaughnessy, Patrick T.; Schmoll, Linda H.

2014-01-01
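The way low downstream counts drive the UCL can be illustrated with a Poisson counting model and a normal approximation. This is a hedged sketch of the idea only; the exact statistical procedure in the paper may differ, and z = 1.96 (95% confidence) is an assumption:

```python
import math

def penetration_ucl(n_down, n_up, z=1.96):
    """Penetration P = n_down / n_up with an upper confidence limit on P
    driven by the Poisson counting uncertainty of the downstream count.
    Normal approximation: the UCL on n_down is n_down + z*sqrt(n_down)."""
    p = n_down / n_up
    ucl = (n_down + z * math.sqrt(n_down)) / n_up
    return p, ucl

def meets_pac(n_down, n_up, z=1.96):
    """Penetration acceptance criterion: UCL <= 2 * P."""
    p, ucl = penetration_ucl(n_down, n_up, z)
    return ucl <= 2.0 * p
```

With 100 downstream counts against 2000 upstream, P = 5.0% and the UCL is about 6.0%, comfortably inside the criterion; with only 2 downstream counts the relative uncertainty is so large that the criterion fails.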

226

A Distributed Particle Swarm Optimization Algorithm for Swarm Robotic Applications

We have derived a version of the particle swarm optimization algorithm that is suitable for a swarm consisting of a large number of small, mobile robots. The algorithm is called the distributed PSO (dPSO).

James M. Hereford

2006-01-01

227

Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems

NASA Astrophysics Data System (ADS)

It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high-dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low-dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space as well as effective resampling strategies is discussed to handle realizability and stability issues. To test the performance of the blended algorithms, the forty-dimensional Lorenz 96 system is utilized with a five-dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics, with changing observation time frequency, and with both dense and sparse spatial observations. In real applications perfect dynamical models are always inaccessible considering the complexities in both modeling and computation of high-dimensional turbulent systems. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill in both capturing non-Gaussian statistics and achieving accurate filtering results in various dynamical regimes with and without model errors.

Qi, Di; Majda, Andrew J.

2015-04-01

228

A non-linear optimal predictive control of a shunt active power filter

In this paper a nonlinear multiple-input multiple-output (MIMO) predictive control using optimal control approach is applied to control the currents of a three-phase three-wire voltage source inverter used as a shunt active power filter (AF). The nonlinear active filter state-space representation model is elaborated in the synchronous d-q frame rotating at the mains fundamental frequency. This state-space model is seen

Nassar Mendalek; Farhat Fnaiech; Kamal Al-Haddad; L.-A. Dessaint

2002-01-01

229

Efficient population utilization strategy for particle swarm optimizer.

The particle swarm optimizer (PSO) is a population-based optimization technique that can be applied to a wide range of problems. This paper presents a variation on the traditional PSO algorithm, called the efficient population utilization strategy for PSO (EPUS-PSO), adopting a population manager to significantly improve the efficiency of PSO. This is achieved by using variable particles in swarms to enhance the searching ability and drive particles more efficiently. Moreover, sharing principles are constructed to stop particles from falling into local minima and to make the global optimal solution easier for particles to find. Experiments were conducted on unimodal and multimodal test functions such as Quadric, Griewank, Rastrigin, Ackley, and Weierstrass, with and without coordinate rotation. The results show good performance of the EPUS-PSO in solving most benchmark problems as compared to other recent variants of the PSO. PMID:19095550

Hsieh, Sheng-Ta; Sun, Tsung-Ying; Liu, Chan-Cheng; Tsai, Shang-Jeng

2009-04-01
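As a baseline reference for variants such as EPUS-PSO, a minimal global-best PSO can be sketched as follows. This is the standard textbook form, not the EPUS-PSO algorithm itself; all parameter values (w, c1, c2, swarm size, iteration count) are illustrative:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO minimizing f over [bounds[0], bounds[1]]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]               # personal best positions
    pval = [f(xi) for xi in x]                # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
    return gbest, gval
```

On a 2-D sphere function this converges to near zero; population-manager schemes like EPUS-PSO modify the swarm size and information sharing around this same core loop.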

230

Electret filters are composed of permanently charged electret fibers and are widely used in applications requiring high collection efficiency and low-pressure drop. We tested electret filter media used in manufacturing cabin air filters by applying two different charging states to the test particles. These charging states were achieved by spray electrification through the atomization process and by bipolar ionization with

J. H. Ji; G. N. Bae; S. H. Kang; J. Hwang

2003-01-01

231

Design, optimization and fabrication of an optical mode filter for integrated optics.

We present the design, optimization, fabrication and characterization of an optical mode filter, which attenuates the snaking behavior of light caused by a lateral misalignment of the input optical fiber relative to an optical circuit. The mode filter is realized as a bottleneck section inserted in an optical waveguide in front of a branching element. It is designed with Bézier curves. Its effect, which depends on the optical state of polarization, is experimentally demonstrated by investigating the equilibrium of an optical splitter, which is greatly improved, though only in TM mode. The measured optical losses induced by the filter are 0.28 dB. PMID:19399117

Magnin, Vincent; Zegaoui, Malek; Harari, Joseph; François, Marc; Decoster, Didier

2009-04-27

232

Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

NASA Technical Reports Server (NTRS)

The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.

Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

2002-01-01

233

Particle Swarm Optimization algorithm for facial emotion detection

Particle Swarm Optimization (PSO) algorithm has been applied and found to be efficient in many searching and optimization related applications. In this paper, we present a modified version of the algorithm that we successfully applied to facial emotion detection. Our approach is based on tracking the movements of facial action units (AUs) placed on the face of a subject and

Bashir Mohammed Ghandi; R. Nagarajan; Hazry Desa

2009-01-01

234

Particle size distribution in effluent of trickling filters and in humus tanks.

Particles and aggregates from trickling filters must be eliminated from wastewater. Usually this happens through sedimentation in humus tanks. Investigations to characterize these solids by way of particle size measurements, image analysis and particle charge measurements (zeta potential) were made within the scope of the Research Center for Science and Technology "Fundamentals of Aerobic Biological Wastewater Treatment" (SFB 411). The particle size measurement results given in this report were obtained at the Ingolstadt wastewater treatment plant, Germany, which served as an example. They have been confirmed by similar results from other facilities. Particles flushed out of trickling filters are partially destroyed on their way to the humus tank, so a large amount of small particles is found there. On average 90% of the particles are smaller than 30 µm. Particle size plays a decisive role in the sedimentation behaviour of solids. Small particles need sedimentation times that cannot be provided in settling tanks. As a result they cause turbidity in the final effluent. The quality of the discharged effluent therefore suffers, and fixed-film reactor treatment offers hardly any advantage over the activated sludge process with regard to sedimentation behaviour. PMID:12230184

Schubert, W; Günthert, F W

2001-11-01
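The observation that small particles need impractically long settling times follows from Stokes' law, under which the terminal settling velocity scales with the square of the particle diameter. The sketch below is illustrative; the default densities (a light biological floc in water) and viscosity are assumptions, not values from the study:

```python
def stokes_settling_velocity(d, rho_p=1050.0, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in a fluid via
    Stokes' law: v = (rho_p - rho_f) * g * d**2 / (18 * mu).
    d in meters; valid only at low particle Reynolds number."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)
```

Under these assumptions a 30 µm floc settles roughly 100 times faster than a 3 µm one (about 2.6e-5 m/s versus 2.6e-7 m/s), which is why the fine fraction escapes humus tanks as turbidity.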

235

Optimized superficially porous particles for protein separations.

Continuing interest in larger therapeutic molecules by pharmaceutical and biotech companies provides the need for improved tools for examining these molecules both during the discovery phase and later during quality control. To meet this need, larger pore superficially porous particles with appropriate surface properties (Fused-Core® particles) have been developed with a pore size of 400 Å, allowing large molecules (<500 kDa) unrestricted access to the bonded phase. In addition, a particle size (3.4 µm) is employed that allows high-efficiency, low-pressure separations suitable for potentially pressure-sensitive proteins. A study of the shell thickness of the new fused-core particles suggests a compromise between a short diffusion path and high efficiency versus adequate retention and mass load tolerance. In addition, superior performance for the reversed-phase separation of proteins requires that specific design properties for the bonded phase be incorporated. As a result, columns of the new particles with unique bonded phases show excellent stability and high compatibility with mass spectrometry-suitable mobile phases. This report includes fast separations of intact protein mixtures, as well as examples of very high-resolution separations of larger monoclonal antibody materials and associated variants. Investigations of protein recovery, sample loading and dynamic range for analysis are shown. The advantages of these new 400 Å fused-core particles, specifically designed for protein analysis, over traditional particles for protein separations are demonstrated. PMID:24094750

Schuster, Stephanie A; Wagner, Brian M; Boyes, Barry E; Kirkland, Joseph J

2013-11-01

236

A Generic Framework for Tracking Using Particle Filter With Dynamic Shape Prior

Tracking deforming objects involves estimating the global motion of the object and its local deformations as functions of time. Tracking algorithms using Kalman filters or particle filters (PFs) have been proposed for tracking such objects, but these have limitations due to the lack of dynamic shape information. In this paper, we propose a novel method based on employing a locally linear embedding in order to incorporate dynamic shape information into the particle filtering framework for tracking highly deformable objects in the presence of noise and clutter. The PF also models image statistics such as mean and variance of the given data which can be useful in obtaining proper separation of object and background. PMID:17491466

Rathi, Yogesh; Vaswani, Namrata; Tannenbaum, Allen

2013-01-01
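The particle filtering framework underlying such trackers can be reduced to a minimal 1-D bootstrap (SIR) filter: predict through the motion model, weight by the observation likelihood, resample. The random-walk model and noise levels below are illustrative assumptions, not the deformable-object tracker of the paper:

```python
import math, random

def bootstrap_pf(observations, n=500, q=0.5, r=1.0, seed=0):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state
    x_k = x_{k-1} + N(0, q^2), observed as y_k = x_k + N(0, r^2).
    Returns the posterior-mean estimate at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # Weight: Gaussian likelihood of the observation given the particle.
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # Resample (multinomial) to fight weight degeneracy.
        particles = rng.choices(particles, weights=w, k=n)
    return estimates
```

Shape-prior trackers like the one above replace the scalar state with a pose-plus-deformation state and the Gaussian likelihood with an image-statistics likelihood, but the predict/weight/resample cycle is the same.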

237

A modified particle swarm optimizer with roulette selection operator

In this paper, a novel particle swarm optimizer combined with a roulette selection operator is proposed, which provides a mechanism to restrain the dominance of super particles in the early stage and can effectively avoid the premature convergence problem. We conduct a variety of experiments to test the proposed algorithm and compare it with other published methods on several test functions taken from the

Fang Wang; Yuhui Qiu

2005-01-01

238

A new particle swarm optimization algorithm for dynamic image clustering

In this paper, we present ACPSO, a new dynamic image clustering algorithm based on particle swarm optimization. ACPSO can partition an image into compact and well-separated clusters without any knowledge of the real number of clusters. It uses a swarm of particles of variable length, which evolve dynamically using mutation operators. Experimental results on real images demonstrate that

Salima Ouadfel; Mohamed Batouche; Abdelmalik Taleb-Ahmed

2010-01-01

239

Optimizing Beam Pattern of Linear Adaptive Phase Array Antenna Based on Particle Swarm Optimization

In this paper, an innovative optimal radiation pattern of an adaptive linear array is derived by phase-only perturbations using a Particle Swarm Optimization (PSO) algorithm. An antenna array is often made as an adaptive antenna. An optimal radiation pattern design for an adaptive antenna system is not only to suppress interference by placing a null in the direction of the

Chao-Hsing Hsu; Chun-Hua Chen; Wen-Jye Shyr; Kun-Huang Kuo; Yi-Nung Chung; Tsung-Chih Lin

2010-01-01

240

High-efficiency particulate air filter test stand and aerosol generator for particle loading studies

NASA Astrophysics Data System (ADS)

This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 x 30 x 29 cm3 nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m3/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 °C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m3/min, high mass concentrations (25 mg/m3) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.

2007-08-01

241

Optimal Weights Mixed Filter for Removing Mixture of Gaussian and Impulse Noises

Exploiting the characteristics of Gaussian noise, we modify the Rank-Ordered Absolute Differences (ROAD) statistic into the Rank-Ordered Absolute Differences for a mixture of Gaussian and impulse noises (ROADG). This makes impulse-noise detection more effective when impulses are mixed with Gaussian noise. By suitably combining the ROADG with the Optimal Weights Filter (OWF), we obtain a new method for dealing with the mixed noise, called the Optimal Weights Mixed Filter (OWMF). Simulation results show that the method is effective at removing the mixed noise.

Jin, Qiyu; Liu, Quansheng

2012-01-01
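The ROAD statistic that the abstract builds on ranks the absolute differences between a pixel and its eight neighbors and sums the smallest few; impulse-corrupted pixels differ from most of their neighbors, so their ROAD value is large. A toy sketch of plain ROAD follows (m = 4 smallest differences is a common choice; this is not the ROADG variant itself):

```python
def road(img, x, y, m=4):
    """Rank-Ordered Absolute Differences at interior pixel (x, y): the sum
    of the m smallest absolute differences between the pixel and its 8
    neighbors. img is a list of rows, indexed img[y][x]."""
    c = img[y][x]
    diffs = sorted(
        abs(c - img[y + dy][x + dx])
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if not (dx == 0 and dy == 0)
    )
    return sum(diffs[:m])
```

A clean pixel in a flat region scores 0, while an isolated impulse of amplitude 190 above its neighbors scores 4 x 190 = 760, making a simple threshold sufficient for detection.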

242

Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to above problems, and can be a useful tool in earthquake engineering.

Safak, Erdal

1989-01-01

243

Ceramic barrier filtration is a leading technology employed in hot gas filtration. Hot gases loaded with ash particles flow through the ceramic candle filters and deposit ash on their outer surface. The deposited ash is periodically removed using a back pulse cleaning jet, a process known as surface regeneration. The cleaning done by this technique still leaves some residual ash on the filter surface, which over a period of time sinters, forms a solid cake and leads to mechanical failure of the candle filter. A room temperature testing facility (RTTF) was built to gain more insight into the surface regeneration process before testing commenced at high temperature. The RTTF was instrumented to obtain pressure histories during the surface regeneration process, and a high-resolution high-speed imaging system was integrated in order to obtain pictures of the surface regeneration process. The objective of this research has been to utilize the RTTF to study the surface regeneration process under convenient room-temperature conditions. The face velocity of the fluidized gas, the regeneration pressure of the back pulse and the time to build up ash on the surface of the candle filter were identified as the important parameters to be studied. Two types of ceramic candle filters were used in the study. Each candle filter was subjected to several cycles of ash build-up followed by a thorough study of the surface regeneration process at different parametric conditions. The pressure histories in the chamber and filter system during build-up and regeneration were then analyzed. The size distribution and movement of the ash particles during the surface regeneration process was studied. The effect of each of the parameters on the performance of the regeneration process is presented, together with a comparative study between the two candle filters with different characteristics.

Vasudevan, V.; Kang, B.S-J.; Johnson, E.K.

2002-09-19

244

Integration of GPS Precise Point Positioning and MEMS-Based INS Using Unscented Particle Filter.

An integrated Global Positioning System (GPS) and Inertial Navigation System (INS) involves nonlinear motion state and measurement models. However, the extended Kalman filter (EKF) is commonly used as the estimation filter, which might lead to solution divergence. This is usually encountered during GPS outages, when low-cost micro-electro-mechanical sensors (MEMS) inertial sensors are used. To enhance the navigation system performance, alternatives to the standard EKF should be considered. Particle filtering (PF) is commonly considered as a nonlinear estimation technique to accommodate severe MEMS inertial sensor biases and noise behavior. However, the computation burden of PF limits its use. In this study, an improved version of PF, the unscented particle filter (UPF), is utilized, which combines the unscented Kalman filter (UKF) and PF for the integration of GPS precise point positioning and MEMS-based inertial systems. The proposed filter is examined and compared with traditional estimation filters, namely EKF, UKF and PF. Tightly coupled mechanization is adopted, which is developed in the raw GPS and INS measurement domain. Un-differenced ionosphere-free linear combinations of pseudorange and carrier-phase measurements are used for PPP. The performance of the UPF is analyzed using a real test scenario in downtown Kingston, Ontario. It is shown that the use of UPF reduces the number of samples needed to produce an accurate solution, in comparison with the traditional PF, which in turn reduces the processing time. In addition, UPF enhances the positioning accuracy by up to 15% during GPS outages, in comparison with EKF. However, all filters produce comparable results when the GPS measurement updates are available. PMID:25815446

Rabbou, Mahmoud Abd; El-Rabbany, Ahmed

2015-01-01

245

Particle Filter with State Permutations for Solving Image Jigsaw Puzzles

We deal with an image jigsaw puzzle problem; the posterior density over the corresponding hidden states is estimated.

Latecki, Longin Jan

246

Tracking Human Position and Lower Body Parts Using Kalman and Particle Filters Constrained by

An approach for visual tracking of human body parts is introduced. The presented approach demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera, using a 2D limb tracking system.

Nebel, Jean-Christophe

247

An extended Kalman particle filter algorithm for HRG-based srapdown attitude system

In this paper, we propose a leveling algorithm based on extended Kalman particle filter (EKPF) for compensating the initial attitude error and the accumulated error. The meaning of leveling in the paper is to acquire two attitude angles of the roll and the pitch of the aircraft during its motion. The EKPF algorithm is applied to our inertial attitude reference

Bochang Shen; Guoxing Yi; Yao Chen; Dongzhe Wang; Changhong Wang

2008-01-01

248

On the Position Accuracy of Mobile Robot Localization Based on Particle Filters Combined with Scan Matching

Many applications in mobile robotics, and especially industrial applications, require operating from predefined locations such as work benches. As most industrial robots, due to safety restrictions

Stachniss, Cyrill

249

Reception State Estimation of GNSS satellites in urban environment using particle filtering

The reception state of a satellite is an unavailable piece of information for Global Navigation Satellite System receivers. Its knowledge or estimation can be used

Université Paris-Sud XI

250

X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES

X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...

251

A fast atmospheric turbulent parameters estimation using particle filtering. Application to LIDAR

Doppler LIDAR is typically used to get this kind of information because it can make fast, distant measurements. The approach is tested on simulated Doppler LIDAR measurements in three-dimensional modeling.

Baehr, Christophe

252

Dual Domain Auxiliary Particle Filter with Integrated Target Signature Update

A particle filter operating jointly in the pixel domain and the modulation domain is presented for tracking infrared targets. Results show that the dual domain auxiliary particle filter with integrated target signature update provides a significant

Havlicek, Joebob

253

A multiobjective memetic algorithm based on particle swarm optimization.

In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. PMID:17278557

Liu, Dasheng; Tan, K C; Goh, C K; Ho, W K

2007-02-01

254

An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images.

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3-D optimized blockwise version of the nonlocal (NL)-means filter (Buades, et al., 2005). The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2-D images, but reducing the computational burden is a critical aspect of extending the method to 3-D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: 1) an automatic tuning of the smoothing parameter; 2) a selection of the most relevant voxels; 3) a blockwise implementation; and 4) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb (Collins, et al., 1998). The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods [anisotropic diffusion (Perona and Malik, 1990) and total variation minimization (Rudin, et al., 1992)] in terms of accuracy (measured by the peak signal-to-noise ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341

Coupe, P; Yger, P; Prima, S; Hellier, P; Kervrann, C; Barillot, C

2008-04-01

255

An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the Non Local (NL) means filter [1]. The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2D images, but reducing the computational burden is a critical aspect to extend the method to 3D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computational time while preserving the performance of the NL-means filter. A fully-automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: (a) an automatic tuning of the smoothing parameter, (b) a selection of the most relevant voxels, (c) a blockwise implementation and (d) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb [2]. The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion [3] and Total Variation minimization [4]) in terms of accuracy (measured by the Peak Signal to Noise Ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341

Coupé, Pierrick; Yger, Pierre; Prima, Sylvain; Hellier, Pierre; Kervrann, Charles; Barillot, Christian

2008-01-01

256

In this paper we modify the original primal-dual interior-point filter method proposed in (18) for the solution of nonlinear programming problems. We introduce two new optimality filter entries based on the objective function, and thus better suited for the purposes of minimization, and propose conditions for using inexact Hessians. We show that the global convergence properties of the method remain

RENATA SILVA; MICHAEL ULBRICH; STEFAN ULBRICH; N. VICENTE

257

Optimization of magnetic switches for single particle and cell transport

The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

Abedini-Nassab, Roozbeh; Yellen, Benjamin B., E-mail: yellen@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, North Carolina 27708 (United States); Joint Institute, University of Michigan-Shanghai Jiao Tong University, Shanghai Jiao Tong University, Shanghai 200240 (China); Murdoch, David M. [Department of Medicine, Duke University, Durham, North Carolina 27708 (United States); Kim, CheolGi [Department of Emerging Materials Science, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of)

2014-06-28

258

Boundary filters for vector particles passing parity breaking domains

The electrodynamics supplemented with a Lorentz and CPT invariance violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, stitched together on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Effects of reflection from the boundary (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum through the boundary.

Kolevatov, S. S.; Andrianov, A. A. [Saint Petersburg State University, 1 ul. Ulyanovskaya, St. Petersburg, 198504 (Russian Federation)

2014-07-23

259

Boundary filters for vector particles passing parity breaking domains

NASA Astrophysics Data System (ADS)

The electrodynamics supplemented with a Lorentz and CPT invariance violating Chern-Simons (CS) action (Carroll-Field-Jackiw electrodynamics) is studied when the parity-odd medium is bounded by a hyperplane separating it from the vacuum. The solutions in both half-spaces are carefully discussed and, for a space-like boundary, stitched together on the boundary with the help of Bogoliubov transformations. The presence of two different Fock vacua is shown. The passage of photons and massive vector mesons through a boundary between the CS medium and the vacuum of conventional Maxwell electrodynamics is investigated. Effects of reflection from the boundary (up to total reflection) are revealed when vector particles escape to the vacuum or enter from the vacuum through the boundary.

Kolevatov, S. S.; Andrianov, A. A.

2014-07-01

260

NASA Astrophysics Data System (ADS)

System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
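The abstract's starting point, a sequential-importance-resampling (bootstrap) particle filter, can be sketched as below; the small post-resampling jitter stands in for the regularization step the paper describes, and the scalar random-walk model and all noise levels are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(obs, n_particles=500, q=0.1, r=0.5, jitter=0.02):
    """Bootstrap (SIR) particle filter for x_k = x_{k-1} + N(0, q^2),
    y_k = x_k + N(0, r^2). The jitter after resampling diversifies the
    particles, in the spirit of the regularization discussed above."""
    parts = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in obs:
        parts = parts + rng.normal(0.0, q, n_particles)     # propagate
        logw = -0.5 * ((y - parts) / r) ** 2                # log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.dot(w, parts))                  # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)     # resample
        parts = parts[idx] + rng.normal(0.0, jitter, n_particles)  # regularize
    return np.array(estimates)

# Simulated truth and noisy measurements
T = 100
truth = np.cumsum(rng.normal(0.0, 0.1, T))
obs = truth + rng.normal(0.0, 0.5, T)
est = bootstrap_pf(obs)
print(np.mean((obs - truth) ** 2), np.mean((est - truth) ** 2))
```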

Liu, Jie; Wang, Wilson; Ma, Fai

2011-07-01

261

The performance verification of an evolutionary canonical particle swarm optimizer.

We previously proposed to introduce evolutionary computation into particle swarm optimization (PSO), named evolutionary PSO (EPSO). It is well known that a constricted version of PSO, i.e., a canonical particle swarm optimizer (CPSO), has good convergence properties compared with PSO. To further improve the search performance of a CPSO, we propose in this paper a new method called an evolutionary canonical particle swarm optimizer (ECPSO) using the meta-optimization proposed in EPSO. The ECPSO is expected to be an optimized CPSO in that optimized values of parameters are used in the CPSO. We also introduce a temporally cumulative fitness function into the ECPSO to reduce stochastic fluctuation in evaluating the fitness function. Our experimental results indicate that (1) the optimized values of parameters are quite different from those in the conventional CPSO; and (2) the search performance of the ECPSO, i.e., the optimized CPSO, is superior to that of CPSO, OPSO, EPSO, and RGA/E except for the Rastrigin problem. PMID:20346858
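The canonical (constricted) PSO referred to above can be sketched as follows, using Clerc's standard constriction coefficient chi = 0.7298 with c1 = c2 = 2.05; the swarm size, iteration count, and search bounds are illustrative choices, not the meta-optimized values the paper reports.

```python
import numpy as np

rng = np.random.default_rng(2)

def cpso(f, dim=2, n=30, iters=200, chi=0.7298, c=2.05, span=5.0):
    """Canonical PSO minimizing f: velocities are damped by the
    constriction factor chi, pulling particles toward personal and
    global best positions."""
    x = rng.uniform(-span, span, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = chi * (v + c * r1 * (pbest - x) + c * r2 * (g - x))  # constricted update
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda p: float(np.sum(p ** 2))
best, best_val = cpso(sphere)
print(best, best_val)
```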

Zhang, Hong; Ishikawa, Masumi

2010-05-01

262

Gaussian mixture sigma-point particle filter for optical indoor navigation system

NASA Astrophysics Data System (ADS)

With the fast growth and popularization of smart computing devices, there is a rising demand for accurate and reliable indoor positioning. Recently, systems using visible light communications (VLC) technology have been considered as candidates for indoor positioning applications. A number of researchers have reported that VLC-based positioning systems could achieve position estimation accuracy on the order of centimeters. This paper proposes an indoor navigation environment based on VLC technology. Light-emitting diodes (LEDs), which are essentially semiconductor devices, can be easily modulated and used as transmitters within the proposed system. Positioning is realized by collecting received-signal-strength (RSS) information on the receiver side, following which least-squares estimation is performed to obtain the receiver position. To enable tracking of the user's trajectory and reduce the effect of outliers in the raw measurements, different filters are employed. In this paper, by computer simulations we show that the Gaussian mixture sigma-point particle filter (GM-SPPF) outperforms other filters such as the basic Kalman filter and the sequential importance-resampling particle filter (SIR-PF), at a reasonable computational cost.
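The least-squares estimation step mentioned in the abstract can be sketched as a linearized trilateration solve; the anchor layout is illustrative, and the sketch assumes the RSS readings have already been converted to ranges by a path-loss model, which the paper's system would do on the receiver side.

```python
import numpy as np

def ls_position(anchors, dists):
    """Linearized least-squares fix from ranges to known anchors.
    Subtracting the first anchor's range equation from the others
    cancels the quadratic |x|^2 term, leaving a linear system."""
    a0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - a0)
    b = d0 ** 2 - dists[1:] ** 2 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Four hypothetical LED anchors at the corners of a 4 m x 4 m ceiling
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
true_pos = np.array([1.0, 2.5])
dists = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free ranges
print(ls_position(anchors, dists))
```

With noisy ranges the same overdetermined solve returns the least-squares compromise, which the paper then smooths with a tracking filter.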

Zhang, Weizhi; Gu, Wenjun; Chen, Chunyi; Chowdhury, M. I. S.; Kavehrad, Mohsen

2013-12-01

263

Using the innovation analysis method in the time domain, based on the autoregressive moving average (ARMA) innovation model, this paper presents a unified white noise estimation theory that includes both input and measurement white noise estimators, and presents a new steady-state optimal state estimation theory. Non-recursive optimal state estimators are given, whose recursive version gives a steady-state Kalman filter, where

Zi-Li Deng; Huan-Shui Zhang; Shu-Jun Liu; Lu Zhou

1996-01-01

264

Comparison of optimal and local search methods for designing finite wordlength FIR digital filters

This paper presents a comparison between an optimal (branch-and-bound) algorithm and a suboptimal (local search) algorithm for the design of finite wordlength finite-impulse-response (FIR) digital filters. Experimental results are described for 11 examples of length 15 to 35. It is concluded that when computer resources are not available for the optimal method, it is still worth applying the local search

D. Kodek; K. Steiglitz

1981-01-01

265

Optimizing Automated Particle Analysis for Forensic Applications

(Slide excerpt; only topic headings are recoverable.) Measuring accuracy and pixel dwell time; compositional analysis limits of detection; mapping process time. Major time sinks: stage motion (tiling, stage speed), searching (search pixel size, pixel dwell time, area), and overhead/QC. Strategies for optimizing stage movement: speed up the stage.

Perkins, Richard A.

266

A hierarchical particle swarm optimizer and its adaptive variant.

A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO, in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant is to assign different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes. PMID:16366251

Janson, Stefan; Middendorf, Martin

2005-12-01

267

Optimization of mass spectrometers using the adaptive particle swarm algorithm.

Optimization of mass spectrometers using the adaptive particle swarm algorithm (APSA) is described along with implementations for ion optical simulations and various time-of-flight (TOF) instruments. The need for in situ self optimization is addressed through discussion of the reflectron TOF mass spectrometer (RTOF) on the European Space Agency mission Rosetta. In addition, a tool for optimization of laboratory mass spectrometers is presented and tested on two different instruments. After the application of APSA optimization, a substantial increase in performance for mass spectrometers that have manually been tuned for several weeks or months is demonstrated. PMID:22124986

Bieler, A; Altwegg, K; Hofer, L; Jäckel, A; Riedo, A; Sémon, T; Wahlström, P; Wurz, P

2011-11-01

268

The use of cylindrical candle filters to remove fine (~0.005 mm) particles from hot (~500-900 °C) gas streams currently is being developed for applications in advanced pressurized fluidized bed combustion (PFBC) and integrated gasification combined cycle (IGCC) technologies. Successfully deployed with hot-gas filtration, PFBC and IGCC technologies will allow the conversion of coal to electrical energy by direct passage of the filtered gases into non-ruggedized turbines and thus provide substantially greater conversion efficiencies with reduced environmental impacts. In the usual approach, one or more clusters of candle filters are suspended from a tubesheet in a pressurized (P ≲ 1 MPa) vessel into which hot gases and suspended particles enter; the gases pass through the walls of the cylindrical filters, and the filtered particles form a cake on the outside of each filter. The cake is then removed periodically by a backpulse of compressed air from inside the filter, which passes through the filter wall and filter cake. In various development or demonstration systems the thickness of the filter cake has proved to be an important, but unknown, process parameter. This paper describes a physical model for cake and pressure buildups between cleaning backpulses, and for longer term buildups of the "baseline" pressure drop, as caused by incomplete filter cleaning and/or re-entrainment. When combined with operating data and laboratory measurements of the cake porosity, the model may be used to calculate the (average) filter permeability, the filter-cake thickness and permeability, and the fraction of filter cake left on the filter by the cleaning backpulse or re-entrained after the backpulse. When used for a variety of operating conditions (e.g., different coals, sorbents, temperatures, etc.), the model eventually may provide useful information on how the filter-cake properties depend on the various operating parameters.
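A physical model of this kind typically treats the filter wall and the growing cake as Darcy flow resistances in series; the sketch below uses that textbook form (not necessarily the paper's exact model), and every numerical value is illustrative rather than measured.

```python
def darcy_dp(mu, face_velocity, wall_thick, wall_perm, cake_thick, cake_perm):
    """Series Darcy resistances of the candle-filter wall and the cake:
    dP = mu * U * (L_wall / k_wall + L_cake / k_cake). All SI units
    (Pa, Pa*s, m/s, m, m^2)."""
    return mu * face_velocity * (wall_thick / wall_perm + cake_thick / cake_perm)

# Illustrative (not measured) values: gas viscosity ~4e-5 Pa*s at high
# temperature, 4 cm/s face velocity, 1 cm wall, 2 mm cake, and
# permeabilities chosen purely for demonstration.
dp_clean = darcy_dp(4e-5, 0.04, 0.01, 2e-13, 0.0, 1e-13)    # just after a backpulse
dp_caked = darcy_dp(4e-5, 0.04, 0.01, 2e-13, 0.002, 1e-13)  # cake built up
print(dp_clean, dp_caked)
```

Incomplete cleaning would be modeled by leaving a residual `cake_thick` after each backpulse, which raises the "baseline" pressure drop over successive cycles as the abstract describes.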

Smith, D.H.; Powell, V. [USDOE Morgantown Energy Technology Center, WV (United States); Ibrahim, E. [Oak Ridge Inst. for Science and Education, TN (United States); Ferer, M. [West Virginia Univ., Morgantown, WV (United States). Dept. of Physics; Ahmadi, G. [National Research Council, Washington, DC (United States)

1996-12-31

269

NASA Astrophysics Data System (ADS)

In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. 
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
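The maximin fitness analysis mentioned above can be sketched as follows (after Balling's maximin criterion, assuming all objectives are minimized); the sample objective vectors are illustrative.

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness: for each candidate i, take the max over rivals j
    of the min over objectives of f_i - f_j. Negative values mark
    non-dominated (Pareto-optimal) solutions; positive values mark
    dominated ones. F has shape (n_candidates, n_objectives)."""
    n = F.shape[0]
    fit = np.empty(n)
    for i in range(n):
        diffs = F[i] - np.delete(F, i, axis=0)   # compare i to every rival
        fit[i] = np.max(np.min(diffs, axis=1))
    return fit

# Two objectives; the first three points form the Pareto front,
# the last point is dominated by (2, 2).
F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(maximin_fitness(F))
```

Selecting the particle with the smallest maximin fitness gives the cheap, unique leader identification the abstract contrasts with ranking-plus-niching schemes.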

Paasche, H.; Tronicke, J.

2012-04-01

270

Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter

In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF's performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.

Zhou, Ning; Meng, Da; Lu, Shuai

2013-11-11

271

NASA Technical Reports Server (NTRS)

Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.

Zaychik, Kirill B.; Cardullo, Frank M.

2012-01-01

272

In a recent paper a novel approach was presented for the restoration of canonical signed-digit (CSD) numbers to their correct format after the application of crossover and mutation operations in genetic algorithms. This paper is concerned with the development of a new technique for the optimization of FIR digital filters over the CSD coefficient space based on genetic algorithms. This

A. T. G. Fuller; B. Nowrouzian; F. Ashrafzadeh

1998-01-01

273

Approximate String Membership Checking: A Multiple Filter, Optimization-Based Approach

We consider the approximate string membership checking (ASMC) problem of extracting all the strings or substrings in a document that approximately match some string in a given dictionary. To solve
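A brute-force baseline for the ASMC problem checks each token against the dictionary by Levenshtein (edit) distance; the paper's contribution is the multiple filters that prune this quadratic search, which are not reproduced here.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance, kept to two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def approx_members(tokens, dictionary, k=1):
    """Report tokens within edit distance k of some dictionary string."""
    return [t for t in tokens if any(edit_distance(t, d) <= k for d in dictionary)]

print(approx_members(["colour", "filter", "swrm"], {"color", "swarm"}, k=1))
```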

Barman, Siddharth

274

Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin

The goal is to minimize the error associated with histological parameters characterizing normal skin tissue. These parameters can be recovered from digital images of the skin using a physics-based model of skin coloration. The relationship

Claridge, Ela

275

Efficient electromagnetic optimization of microwave filters and multiplexers using rational models

A method is presented for the efficient optimization of microwave filters and multiplexers designed from an ideal prototype. The method is based on the estimation of a rational function adjusted to a reduced number of samples of the microwave device response obtained either through electromagnetic analysis or measurements. From this rational function, a circuital network having the previously known topology

Alejandro García-Lampérez; Sergio Llorente-Romano; Magdalena Salazar-Palma; Tapan K. Sarkar

2004-01-01

276

Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, random values are normally used for the acceleration coefficients, and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

2014-01-01

277

Particle filtering for sensor-to-sensor self-calibration and motion estimation

NASA Astrophysics Data System (ADS)

This paper addresses the problem of calibrating the six degrees-of-freedom rigid body transform between a camera and an inertial measurement unit (IMU) while at the same time estimating the 3D motion of a vehicle. High-fidelity measurement models for the camera and IMU are derived and the estimation algorithm is implemented within the particle filter (PF) framework. Belonging to the class of sequential Monte Carlo methods, the filter uses the unscented Kalman filter (UKF) to generate the importance proposal distribution. This not only avoids the limitation of the UKF, which applies only to Gaussian distributions, but also avoids the limitation of the standard PF, which cannot incorporate the latest measurements. Moreover, the proposed algorithm requires no additional hardware equipment. Simulation results illustrate the ill effects of misalignment on motion estimation and demonstrate accurate estimation of both the calibration parameters and the state of the vehicle.

Yang, Yafei; Li, Jianguo

2013-01-01

278

Optimizing spatial filters with kernel methods for BCI applications

NASA Astrophysics Data System (ADS)

Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step of BCI technology is to find a reliable method to detect the particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract the discriminative pattern from non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is performing a nonlinear form of CSP by the use of kernel methods that can efficiently compute the common and distinct components in high dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.
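The linear CSP analysis that the kernel method above extends can be sketched via whitening plus an eigendecomposition; the synthetic two-class data, channel counts, and trial sizes are illustrative assumptions, and the kernelized variant is not shown.

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Common spatial patterns: find spatial filters w maximizing the
    variance ratio w'C1w / w'C2w via whitening of C1 + C2 followed by
    an eigendecomposition. X1, X2: (trials, channels, samples)."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening transform
    d, B = np.linalg.eigh(P @ C1 @ P.T)            # eigendecompose whitened C1
    W = (B.T @ P)[np.argsort(d)[::-1]]             # filters, largest ratio first
    return W[:n_filters]

# Synthetic two-class data: class 1 strong on channel 0, class 2 on channel 2
rng = np.random.default_rng(4)
X1 = rng.standard_normal((20, 3, 200)); X1[:, 0, :] *= 3.0
X2 = rng.standard_normal((20, 3, 200)); X2[:, 2, :] *= 3.0
W = csp_filters(X1, X2, n_filters=1)
v1 = np.mean([np.var(W[0] @ t) for t in X1])
v2 = np.mean([np.var(W[0] @ t) for t in X2])
print(v1, v2)
```

The variance of each trial projected through the first filter then serves as a discriminative feature, high for one class and low for the other.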

Zhang, Jiacai; Tang, Jianjun; Yao, Li

2007-11-01

279

Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

NASA Technical Reports Server (NTRS)

Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
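A scalar sketch of the filter-versus-smoother comparison described above; it uses the Rauch-Tung-Striebel backward pass, which is algebraically equivalent to the optimally weighted average of forward and backward filters. The random-walk model and noise levels are illustrative, not the EUVE setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def kalman_rts(obs, q=0.1, r=0.5):
    """Scalar Kalman filter for x_k = x_{k-1} + N(0, q^2),
    y_k = x_k + N(0, r^2), followed by a Rauch-Tung-Striebel
    backward pass that folds future measurements into each estimate."""
    n = len(obs)
    xf, Pf = np.zeros(n), np.zeros(n)   # filtered mean / variance
    xp, Pp = np.zeros(n), np.zeros(n)   # predicted mean / variance
    x, P = 0.0, 1.0
    for k, y in enumerate(obs):
        P = P + q ** 2                  # predict (state transition is identity)
        xp[k], Pp[k] = x, P
        K = P / (P + r ** 2)            # Kalman gain
        x, P = x + K * (y - x), (1 - K) * P
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):      # backward (smoothing) pass
        G = Pf[k] / Pp[k + 1]
        xs[k] = xf[k] + G * (xs[k + 1] - xp[k + 1])
    return xf, xs

T = 200
truth = np.cumsum(rng.normal(0, 0.1, T))
obs = truth + rng.normal(0, 0.5, T)
xf, xs = kalman_rts(obs)
print(np.mean((xf - truth) ** 2), np.mean((xs - truth) ** 2))
```

The roughly halved error variance the abstract mentions shows up here as the smoothed MSE falling below the filtered MSE on interior samples.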

Sedlak, J.

1994-01-01

280

Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

NASA Astrophysics Data System (ADS)

This article presents a control strategy for a three-phase grid interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics in the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of active and reactive power injected into the grid. The startup transients can also be controlled by the proposed control strategy; in addition, the optimized LCL filter has lower conduction and switching (copper) losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the total harmonic distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment, and the results are analyzed.

Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

2013-09-01

281

A particle swarm optimization algorithm for balancing assembly lines

Purpose - The purpose of this paper is to apply particle swarm optimization (PSO), a known combinatorial optimization algorithm, to multi-objective (MO) balancing of large assembly lines. Design/methodology/approach - A novel approach based on PSO is developed to tackle the simple assembly line balancing problem (SALBP), a well-known NP-hard production and operations management problem. Line balancing is considered for two-criteria

Dimitris I. Petropoulos; Andreas C. Nearchou

2011-01-01

282

Genetic algorithm and particle swarm optimization combined with Powell method

NASA Astrophysics Data System (ADS)

In recent years, population-based algorithms have become increasingly robust and easy to use; inspired by Darwin's theory of evolution, they search for the best solution with a population that progresses over several generations. This paper presents hybrid variants of two such methods, the genetic algorithm and the bio-inspired particle swarm optimization, both combined with a local method, the Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.

Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui

2013-10-01

283

Design Optimization of Vena Cava Filters: An application to dual filtration devices

Pulmonary embolism (PE) is a significant medical problem that results in over 300,000 fatalities per year. A common preventative treatment for PE is the insertion of a metallic filter into the inferior vena cava that traps thrombi before they reach the lungs. The goal of this work is to use methods of mathematical modeling and design optimization to determine the configuration of trapped thrombi that minimizes the hemodynamic disruption. The resulting configuration has implications for constructing an optimally designed vena cava filter. Computational fluid dynamics is coupled with a nonlinear optimization algorithm to determine the optimal configuration of trapped model thrombus in the inferior vena cava. The location and shape of the thrombus are parameterized, and an objective function, based on wall shear stresses, determines the worthiness of a given configuration. The methods are fully automated and demonstrate the capabilities of a design optimization framework that is broadly applicable. Changes to thrombus location and shape alter the velocity contours and wall shear stress profiles significantly. For vena cava filters that trap two thrombi simultaneously, the undesirable flow dynamics past one thrombus can be mitigated by leveraging the flow past the other thrombus. Streamlining the shape of thrombus trapped along the cava wall reduces the disruption to the flow, but increases the area exposed to abnormal wall shear stress. Computer-based design optimization is a useful tool for developing vena cava filters. Characterizing and parameterizing the design requirements and constraints is essential for constructing devices that address clinical complications. In addition, formulating a well-defined objective function that quantifies clinical risks and benefits is needed for designing devices that are clinically viable.

Singer, M A; Wang, S L; Diachin, D P

2009-12-03

284

NASA Technical Reports Server (NTRS)

The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.

Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

2012-01-01
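The bootstrap (sequential importance resampling) particle filter that underlies approaches like the one above can be sketched in a few lines. This is a generic illustration on a hypothetical scalar model, not the authors' orbit-determination code: the model x_k = 0.5 x_{k-1} + v_k with y_k = x_k + w_k, the noise variances, and the particle count are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=1000, q=1.0, r=1.0):
    """Sequential importance resampling for the toy model
    x_k = 0.5*x_{k-1} + v_k (v ~ N(0, q)),  y_k = x_k + w_k (w ~ N(0, r))."""
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    estimates = []
    for yk in y:
        x = 0.5 * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (yk - x) ** 2 / r                         # Gaussian log-likelihood
        w = np.exp(logw - logw.max())                           # stabilize, then normalize
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))                  # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)         # multinomial resampling
        x = x[idx]
    return np.array(estimates)
```

The resampling step is what keeps the weights from degenerating, which is exactly the failure mode discussed for high-dimensional systems in the proposal-distribution entry above.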

285

A ship is not an absolutely rigid body. Many factors can cause deformations that lead to large errors in mounted devices, especially navigation systems. Such errors should be estimated and compensated effectively, or they will severely reduce the navigation accuracy of the ship. In order to estimate the deformation, an unscented particle filter method for estimating shipboard deformation based on an inertial measurement unit is presented. In this method, a nonlinear shipboard deformation model is built. Simulations demonstrate the accuracy reduction due to deformation. An attitude-plus-angular-rate matching mode is then proposed as a framework to estimate the shipboard deformation using inertial measurement units. Within this framework, to handle the nonlinearity of the system model, an unscented particle filter method is proposed to estimate and compensate the deformation angles. Simulations show that the proposed method gives accurate and rapid deformation estimates, which can increase navigation accuracy after compensation of the deformation. PMID:24248280

Wang, Bo; Xiao, Xuan; Xia, Yuanqing; Fu, Mengyin

2013-01-01

287

NASA Technical Reports Server (NTRS)

Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection and L3 is used to generate a set of candidates that are consistent with the discrepant observations which then continue to be tracked by the RBPF scheme.

Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

2004-01-01

288

A fast particle filter object tracking algorithm by dual features fusion

NASA Astrophysics Data System (ADS)

Within the particle filtering framework, a video object tracking method described by dual cues extracted from the integral histogram and the integral image is proposed. The method takes both the color histogram feature and the Haar-like feature of the target region as the feature representation model, tracking the target region by particle filter. While ensuring real-time responsiveness, it overcomes the poor precision, large fluctuations, and sensitivity to lighting that arise when tracking relies on the histogram feature alone. It shows high efficiency in tracking the target object across multiple video sequences. Finally, it is applied in an augmented reality assisted-maintenance prototype system, which demonstrates that the method can be used in the tracking-registration process of an augmented reality system based on natural features.

Zhao, Shou-wei; Wang, Wei-ming; Ma, Sa-sa; Zhang, Yong; Yu, Ming

2014-11-01

289

Particle Filtering with Region-based Matching for Tracking of Partially Occluded and Scaled Targets*

Visual tracking of arbitrary targets in clutter is important for a wide range of military and civilian applications. We propose a general framework for the tracking of scaled and partially occluded targets, which do not necessarily have prominent features. The algorithm proposed in the present paper utilizes a modified normalized cross-correlation as the likelihood for a particle filter. The algorithm divides the template, selected by the user in the first video frame, into numerous patches. The matching of these patches by particle filtering allows one to handle target occlusions and scaling. Experimental results with fixed rectangular templates show that the method is reliable for videos with nonstationary, noisy, and cluttered backgrounds, and provides accurate trajectories in cases of target translation, scaling, and occlusion. PMID:22506088

Nakhmani, Arie; Tannenbaum, Allen

2012-01-01
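Plain normalized cross-correlation, the building block on which the modified likelihood above is based, compares two patches after removing their means and scales, making the score invariant to affine brightness changes. This is a textbook sketch, not the authors' modified variant:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized grayscale patches.

    Returns a value in [-1, 1]; 1 means a perfect match up to
    brightness offset and contrast scaling.
    """
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

In a patch-based tracker, each particle's weight would typically be a monotone function of scores like this, aggregated over the template's patches.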

290

Particle Filters for Real-Time Fault Detection in Planetary Rovers

NASA Technical Reports Server (NTRS)

Planetary rovers provide a considerable challenge for robotic systems in that they must operate for long periods autonomously, or with relatively little intervention. To achieve this, they need to have on-board fault detection and diagnosis capabilities in order to determine the actual state of the vehicle, and decide what actions are safe to perform. Traditional model-based diagnosis techniques are not suitable for rovers due to the tight coupling between the vehicle's performance and its environment. Hybrid diagnosis using particle filters is presented as an alternative, and its strengths and weaknesses are examined. We also present some extensions to particle filters that are designed to make them more suitable for use in diagnosis problems.

Dearden, Richard; Clancy, Dan; Koga, Dennis (Technical Monitor)

2001-01-01
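The hybrid (discrete fault mode plus continuous state) estimation problem described in this entry and the Livingstone/RBPF entry above can be illustrated with a minimal particle filter in which each particle carries a mode bit and a scalar state. The two-mode actuator model, its gains, and the switching probability are hypothetical values chosen only to show how a fault posterior emerges from the weighted particles; this is not the diagnosis engine of either paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-mode actuator: mode 0 = nominal (gain 1.0), mode 1 = faulty (gain 0.2).
GAINS = np.array([1.0, 0.2])
P_SWITCH = 0.02   # assumed per-step probability of a mode change

def hybrid_pf(y, u, n=2000, q=0.05, r=0.1):
    """Particle filter over a hybrid (discrete mode + continuous state) system.

    Returns, per time step, the posterior probability that the system is faulty.
    """
    mode = np.zeros(n, dtype=int)   # all particles start in the nominal mode
    x = np.zeros(n)
    fault_probs = []
    for yk, uk in zip(y, u):
        flip = rng.random(n) < P_SWITCH                   # sample mode transitions
        mode = np.where(flip, 1 - mode, mode)
        x = x + GAINS[mode] * uk + rng.normal(0, q, n)    # mode-dependent dynamics
        logw = -0.5 * (yk - x) ** 2 / r**2                # measurement likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        fault_probs.append(float(np.sum(w * (mode == 1))))  # P(faulty | data so far)
        idx = rng.choice(n, n, p=w)                       # resample
        x, mode = x[idx], mode[idx]
    return fault_probs
```

Because only a small fraction of particles occupy the faulty mode before a fault occurs, schemes like the look-ahead RBPF above exist precisely to keep such low-probability modes adequately represented.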

291

A self-learning particle swarm optimizer for global optimization problems.

Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms. PMID:22067435

Li, Changhe; Yang, Shengxiang; Nguyen, Trung Thanh

2012-06-01
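The canonical global-best PSO update that SLPSO and the other PSO entries in this list build on is compact. The following minimal sketch shows only the standard velocity and position updates on a sphere test function, not the adaptive four-strategy framework of the paper; the inertia weight w=0.72, acceleration coefficients c1=c2=1.49, and all other settings are conventional values assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=5, n=30, iters=200, w=0.72, c1=1.49, c2=1.49, bound=5.0):
    """Canonical global-best PSO minimizing f on [-bound, bound]^dim."""
    x = rng.uniform(-bound, bound, (n, dim))          # positions
    v = np.zeros((n, dim))                            # velocities
    pbest = x.copy()                                  # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                   # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

sphere = lambda z: float(np.sum(z * z))   # standard test function
```

Every particle here follows the same single learning pattern; SLPSO's contribution, per the abstract, is letting each particle switch adaptively among several such patterns.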

292

Hot gas particulate filtration is a basic component in advanced power generation systems such as Integrated Gasification Combined Cycle (IGCC) and Pressurized Fluidized Bed Combustion (PFBC). These systems require effective particulate removal to protect the downstream gas turbine and also to meet environmental emission requirements. The ceramic barrier filter is one of the options for hot gas filtration. Hot gases flow through ceramic candle filters, leaving ash deposited on the outer surface of the filter. A process known as surface regeneration removes the deposited ash periodically by using a high-pressure back-pulse cleaning jet. After this cleaning process, there may be some residual ash on the filter surface. This residual ash may grow, which may lead to mechanical failure of the filter. A High Temperature Test Facility (HTTF) was built to investigate the ash characteristics during surface regeneration at high temperatures. The system is capable of conducting surface regeneration tests of a single candle filter at temperatures up to 1500 F. Details of the HTTF apparatus as well as some preliminary test results are presented in this paper. In order to obtain sequential digital images of ash particle distribution during the surface regeneration process, a high-resolution, high-speed image acquisition system was integrated into the HTTF system. The regeneration pressure and the transient pressure difference between the inside of the candle filter and the chamber during regeneration were measured using a high-speed PC data acquisition system. The control variables for the high-temperature regeneration tests were (1) face velocity, (2) pressure of the back pulse, and (3) cyclic ash build-up time.

Kang, B.S-J.; Johnson, E.K.; Rincon, J.

2002-09-19

293

Because the cooling system of a plastic injection mold significantly affects the productivity and quality of the final products, cooling system design is of great importance. In this paper, a hybrid approach combining particle swarm optimization (PSO) and genetic algorithms (GA) is developed to achieve optimal cooling system design. Based on the finite element method (FEM) and the finite

Li Ren; WenXiao Zhang

2011-01-01

294

Preparation and optimization of the laser thin film filter

NASA Astrophysics Data System (ADS)

A co-colored thin-film device for a laser-induced damage threshold test system is presented in this paper, enabling the laser-induced damage threshold tester to operate in the 532 nm and 1064 nm bands. Using TFC simulation software, a film system with high reflectance, high transmittance, and resistance to laser damage is designed and optimized. The film is deposited by thermal evaporation; the optical properties of the coating and its laser-induced damage performance are tested, and the reflectance, transmittance, and damage threshold are measured. The results show that the measured parameters (reflectance R >= 98% @ 532 nm, transmittance T >= 98% @ 1064 nm, laser-induced damage threshold LIDT >= 4.5 J/cm2) meet the design requirements, laying the foundation for a multifunctional laser-induced damage threshold tester.

Su, Jun-hong; Wang, Wei; Xu, Jun-qi; Cheng, Yao-jin; Wang, Tao

2014-08-01
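The reflectance of multilayer coatings like the one above is conventionally computed with the characteristic (transfer) matrix method at normal incidence. The sketch below is a generic implementation of that method, not the TFC design software used by the authors; the TiO2-like (n = 2.35) and SiO2-like (n = 1.46) indices in the test are illustrative and ignore dispersion and absorption.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_inc, n_sub, lam):
    """Normal-incidence reflectance of a lossless dielectric multilayer.

    n_layers, d_layers: refractive index and physical thickness of each layer,
    listed from the incident medium toward the substrate.
    """
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / lam            # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])              # stack "admittance" vector
    r = (n_inc * B - C) / (n_inc * B + C)          # amplitude reflection coefficient
    return float(abs(r) ** 2)
```

With no layers this reduces to the Fresnel reflectance of the bare substrate, and a stack of quarter-wave high/low index pairs drives the reflectance toward 1, which is the basic mechanism behind high-reflection coatings at a design wavelength.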

295

A sensitive technique for the measurement of dissolved and particulate actinide concentrations and water-column distributions is described. Pu, Am, and Th isotopes are collected using large-volume, wire-mounted electrical pumping systems. Particles are removed by filtration, and actinides by absorption on MnO2-coated filters. The very large volumes processed (up to 4000 liters) result in very sensitive and precise concentration measurements

H. D. Livingston; J. K. Cochran

1987-01-01

296

A Mixed-State I-Particle Filter for Multi-Camera Speaker Tracking

Tracking speakers in multi-party conversations represents an important step towards automatic analysis of meetings. In this paper, we present a probabilistic method for audio-visual (AV) speaker tracking in a multi-sensor meeting room. The algorithm fuses information coming from three uncalibrated cameras and a microphone array via a mixed-state importance particle filter

Daniel Gatica-Perez; Guillaume Lathoud; Iain McCowan; Jean-Marc Odobez

2003-01-01

297

Radioactive particles are aggregates of radioactive atoms that may contain significant activity concentrations. They have been released into the environment from nuclear weapons tests, and from accidents and effluents associated with the nuclear fuel cycle. Aquatic filter-feeders can capture and potentially retain radioactive particles, which could then provide concentrated doses to nearby tissues. This study experimentally investigated the retention and effects of radioactive particles in the blue mussel, Mytilus edulis. Spent fuel particles originating from the Dounreay nuclear establishment, and collected in the field, comprised a U and Al alloy containing fission products such as (137)Cs and (90)Sr/(90)Y. Particles were introduced into mussels in suspension with plankton-food or through implantation in the extrapallial cavity. Of the particles introduced with food, 37% were retained for 70 h, and were found on the siphon or gills, with the notable exception of one particle that was ingested and found in the stomach. Particles not retained seemed to have been actively rejected and expelled by the mussels. The largest and most radioactive particle (estimated dose rate 3.18 ± 0.06 Gy h(-1)) induced a significant increase in Comet tail-DNA %. In one case this particle caused a large white mark (suggesting necrosis) in the mantle tissue with a simultaneous increase in micronucleus frequency observed in the haemolymph collected from the muscle, implying that non-targeted effects of radiation were induced by radiation from the retained particle. White marks found in the tissue were attributed to ionising radiation and physical irritation. The results indicate that current methods used for risk assessment, based upon the absorbed dose equivalent limit and estimating the "no-effect dose" are inadequate for radioactive particle exposures.
Knowledge is lacking about the ecological implications of radioactive particles released into the environment, for example potential recycling within a population, or trophic transfer in the food chain. PMID:25240099

Jaeschke, B C; Lind, O C; Bradshaw, C; Salbu, B

2015-01-01

298

Segmentation of nerve bundles and ganglia in spine MRI using particle filters.

Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741

Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

2011-01-01
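The Bézier-spline particle representation mentioned above rests on ordinary Bézier curve evaluation, for which de Casteljau's algorithm is the standard numerically stable method. The sketch below is a generic curve evaluator, not the authors' particle dynamics, and the example control points are made up:

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bézier curve of any degree at parameter t in [0, 1]
    using de Casteljau's algorithm (repeated linear interpolation)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one interpolation level
    return pts[0]
```

In a tracker along these lines, each particle would hold a small set of control points, so perturbing particles means perturbing control points while the curve itself stays smooth.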

300

Robust dead reckoning system for mobile robots based on particle filter and raw range scan.

Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) when the robots are faulty, such as with sticking sensors or slipping wheels, because the discrete fault modes and the continuous states have to be estimated simultaneously to achieve reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches to hybrid system estimation problems, and they have been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to achieve accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and the different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318

Duan, Zhuohua; Cai, Zixing; Min, Huaqing

2014-01-01

301

Creating Protein Models from Electron-Density Maps using Particle-Filtering Methods

Motivation: One bottleneck in high-throughput protein crystallography is interpreting an electron-density map; that is, fitting a molecular model to the 3D picture crystallography produces. Previously, we developed Acmi, an algorithm that uses a probabilistic model to infer an accurate protein backbone layout. Here we use a sampling method known as particle filtering to produce a set of all-atom protein models. We use the output of Acmi to guide the particle filter's sampling, producing an accurate, physically feasible set of structures. Results: We test our algorithm on ten poor-quality experimental density maps. We show that particle filtering produces accurate all-atom models, resulting in fewer chains, lower sidechain RMS error, and reduced R factor, compared to simply placing the best-matching sidechains on Acmi's trace. We show that our approach produces a more accurate model than three leading methods (Textal, Resolve, and ARP/wARP) in terms of main chain completeness, sidechain identification, and crystallographic R factor. PMID:17933855

Kondrashov, Dmitry A.; Bitto, Eduard; Soni, Ameet; Bingman, Craig A.; Phillips, George N.; Shavlik, Jude W.

2008-01-01

303

Permutation flow shop scheduling: Fuzzy particle swarm optimization approach

A fuzzy particle swarm optimization (PSO) approach for the minimization of makespan in the permutation flow shop scheduling problem is presented in this paper. In the proposed fuzzy PSO, the inertia weight of PSO and the control parameter of the cross-mutated operation are determined by a set of fuzzy rules. To escape local optima, a cross-mutated operation is introduced. In order

Sai Ho Ling; Frank Jiang; Hung T. Nguyen; Kit Yan Chan

2011-01-01

304

Particle swarm optimization for reconfigurable phase-differentiated array design

Multiple-beam antenna arrays have important applications in communications and radar. This paper describes a method of designing a reconfigurable dual-beam antenna array using a new evolutionary algorithm called particle swarm optimization (PSO). The design problem is to find element excitations that will result in a sector pattern main beam with low side lobes, with the additional requirement that

Dennis Gies; Yahya Rahmat-Samii

2003-01-01

305

Particle swarm optimized multiple regression linear model for data classification

This paper presents a new data classification method based on particle swarm optimization (PSO) techniques. The paper discusses the building of a classifier model based on multiple regression linear approach. The coefficients of multiple regression linear models (MRLMs) are estimated using least square estimation technique and PSO techniques for percentage of correct classification performance comparisons. The mathematical models are developed

Suresh Chandra Satapathy; J. V. R. Murthy; P. V. G. D. Prasad Reddy; Bijan Bihari Misra; Pradipta K. Dash; Ganapati Panda

2009-01-01

306

Particle Swarm Optimization of High-frequency Transformer

Hengsi Qin; Jonathan W. Kimball; Ganesh K.

A high frequency transformer is a critical component of a dual active bridge (DAB) converter. Operation of a DAB converter requires its transformer to have a specific amount of winding

Kimball, Jonathan W.

307

Control of a flexible plate structure using particle swarm optimization

An investigation of a control mechanism using particle swarm optimization (PSO) to suppress the vibration of a flexible plate has been carried out. Active vibration control (AVC) is implemented for the single-input single-output (SISO) case, and the controller is realized in linear parametric form where all parameters are arbitrarily chosen by applying the working mechanism of PSO. The objective function

Sabariah Julai; M. O. Tokhi; Maziah Mohamad; Idris Abd Latiff

2009-01-01

308

Optimization of Particle-in-Cell Codes on RISC Processors

NASA Technical Reports Server (NTRS)

General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.

Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

1996-01-01

309

Particle swarm optimization with deliberate loss of information

C. A. Voglis; K. E. Parsopoulos; I. E. Lagaris (Department of Computer Science, University of Ioannina, Ioannina, Greece; e-mail: kostasp@cs.uoi.gr)

Lagaris, Isaac

310

An effective particle swarm optimization method for data clustering

Data clustering analysis is generally applied to image processing, customer relationship management and product family construction. This paper applies the particle swarm optimization (PSO) algorithm to data clustering problems. Two reflex schemes are implemented in the PSO algorithm to improve efficiency. The proposed methods were tested on seven datasets, and their performance is compared with those of PSO, K-means and two

I. W. Kao; C. Y. Tsai; Y. C. Wang

2007-01-01

311

Particle Swarm Optimization: Dynamic parameter adjustment using swarm activity

In this paper, swarm activity, which is a new index for assessing the diversification (global search) and intensification (local search) during particle swarm optimization (PSO) searches, is introduced. It is shown that swarm activity allows the quantitative assessment of the diversification and intensification during the PSO search. Using this concept, a new PSO called activity feedback PSO (AFPSO) is constructed,

Nobuhiro Iwasaki; Keiichiro Yasuda; Genki Ueno

2008-01-01

312

Optimization of bandpass optical filters based on TiO2 nanolayers

NASA Astrophysics Data System (ADS)

The design and realization of high-quality bandpass optical filters are often very difficult tasks due to the strong correlation of the optical index of dielectric thin films to their final thickness, as observed in many industrial deposition processes. We report on the optimization of complex optical filters in the visible and NIR spectral ranges as realized by ion beam-assisted electron beam deposition of silica and titanium oxide multilayers. We show that this process always leads to amorphous films prior to thermal annealing. On the contrary, the optical dispersion of TiO2 nanolayers is highly dependent on their thickness, while this dependence vanishes for layers thicker than 100 nm. We demonstrate that accounting for this nonlinear dependence of the optical index is both very important and necessary in order to obtain high-quality optical filters.

Démarest, Nathalie; Deubel, Damien; Keromnès, Jean-Claude; Vaudry, Claude; Grasset, Fabien; Lefort, Ronan; Guilloux-Viry, Maryline

2015-01-01

313

Optimal Design of CSD Coefficient FIR Filters Subject to Number of Nonzero Digits

NASA Astrophysics Data System (ADS)

In hardware implementations of FIR (Finite Impulse Response) digital filters, it is desirable to reduce the total number of nonzero digits used to represent the filter coefficients. In general, the design of FIR filters with CSD (Canonic Signed Digit) representation, which is efficient for reducing the number of multiplier units, is treated as a 0-1 combinatorial problem. In such problems, difficult constraints prevent the problem from being linearized directly. Although many heuristic approaches have been applied, the solutions obtained in this manner cannot guarantee optimality. In this paper, we formulate the design problem as a 0-1 mixed-integer linear programming problem and solve it using the branch-and-bound technique, a powerful method for solving integer programming problems. Several design examples demonstrate the efficiency of the proposed method.

Ozaki, Yuichi; Suyama, Kenji
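A canonic signed-digit representation, as used above, encodes a coefficient with digits in {-1, 0, +1} such that no two adjacent digits are nonzero, which minimizes the number of adders/subtractors behind each coefficient multiplier. A minimal conversion sketch for integer coefficients follows (the helper name is ours, not from the paper; the optimization in the paper constrains the count of nonzero digits, which this routine merely reports implicitly):

```python
def to_csd(n):
    """Canonic signed-digit representation of integer n, least significant digit
    first, digits drawn from {-1, 0, +1} with no two adjacent nonzeros."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            d = 2 - (n % 4)    # +1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
        digits.append(d)
        n = (n - d) // 2       # subtracting d makes n even before halving
    return digits
```

For example, 7 = 8 - 1 becomes [-1, 0, 0, 1] (two nonzero digits instead of the three in binary 111), which is why CSD coefficients need fewer add/subtract stages in hardware.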

314

NASA Astrophysics Data System (ADS)

We report the development of optimized fluorescent dye-doped tracer particles for gas-phase particle image velocimetry (PIV) and their use to eliminate 'flare' from the images obtained. In such applications, micron-sized tracer particles are normally required to accurately follow the flow. However, as the tracer size is reduced, the amount of light incident on the particle diminishes, and consequently so does the intensity of emitted light (fluorescence). Hence, there is a requirement to identify dyes with high quantum yield that can be dissolved in conventional tracer media at high concentrations. We describe the selection and characterization of a highly fluorescent blue-emitting dye, Bis-MSB, using a novel method, employing stabilized micro-emulsions, to emulate the fluorescence properties of tracer particles. We present the results of PIV experiments, using 1 μm tracer particles of o-xylene doped with Bis-MSB, in which elastically scattered 'flare' has been successfully removed from the images using an appropriate optical filter.

Chennaoui, M.; Angarita-Jaimes, D.; Ormsby, M. P.; Angarita-Jaimes, N.; McGhee, E.; Towers, C. E.; Jones, A. C.; Towers, D. P.

2008-11-01

315

NASA Astrophysics Data System (ADS)

In this study, we investigate the effect of "biased sampling," i.e., the clustering of inertial particles in regions of the flow with low vorticity, and "filtering," i.e., the tendency of inertial particles to attenuate the fluid velocity fluctuations, on the probability density function of inertial particle accelerations. In particular, we find that the concept of "biased filtering" introduced by Ayyalasomayajula et al. ["Modeling inertial particle acceleration statistics in isotropic turbulence," Phys. Fluids 20, 0945104 (2008), 10.1063/1.2976174], in which particles filter stronger acceleration events more than weaker ones, is relevant to the higher order moments of acceleration. Flow topology and its connection to acceleration is explored through invariants of the velocity-gradient, strain-rate, and rotation-rate tensors. A semi-quantitative analysis is performed where we assess the contribution of specific flow topologies to acceleration moments. Our findings show that the contributions of regions of high vorticity and low strain decrease significantly with Stokes number, a non-dimensional measure of particle inertia. The contribution from regions of low vorticity and high strain exhibits a peak at a Stokes number of approximately 0.2. Following the methodology of Ooi et al. ["A study of the evolution and characteristics of the invariants of the velocity-gradient tensor in isotropic turbulence," J. Fluid Mech. 381, 141 (1999), 10.1017/S0022112098003681], we compute mean conditional trajectories in planes formed by pairs of tensor invariants in time. Among the interesting findings is the existence of a stable focus in the plane formed by the second invariants of the strain-rate and rotation-rate tensors. Contradicting the results of Ooi et al., we find a stable focus in the plane formed by the second and third invariants of the strain-rate tensor for fluid tracers. 
We confirm, at an even higher Reynolds number, the conjecture of Collins and Keswani ["Reynolds number scaling of particle clustering in turbulent aerosols," New J. Phys. 6, 119 (2004), 10.1088/1367-2630/6/1/119] that inertial particle clustering saturates at large Reynolds numbers. The result is supported by the theory presented in Chun et al. ["Clustering of aerosol particles in isotropic turbulence," J. Fluid Mech. 536, 219 (2005), 10.1017/S0022112005004568].

Salazar, Juan P. L. C.; Collins, Lance R.

2012-08-01

316

A RANGE-ONLY MULTIPLE TARGET PARTICLE FILTER TRACKER

Volkan Cevher, Rajbabu Velmurugan, and James H

filter tracker to track multiple maneuvering targets using a batch of range measurements. The state is proved using geometrical arguments. The data likelihood treats the range observations as an image using

Cevher, Volkan

317

Optimal Control for a Parallel Hybrid Hydraulic Excavator Using Particle Swarm Optimization

Optimal control of a parallel hybrid hydraulic excavator (PHHE) using particle swarm optimization (PSO) is put forward. A power-train mathematical model of the PHHE is presented along with an analysis of the components' parameters. The optimal control problem is then formulated, and the PSO algorithm is introduced to solve this nonlinear optimization problem, which contains many inequality and equality constraints. Comparisons between the optimal control strategy and a rule-based one show that hybrids using the optimal control achieve better fuel economy. Although the PSO algorithm runs off-line, it still provides a performance benchmark for the PHHE and offers deeper insight into hybrid excavators. PMID:23818832
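For readers unfamiliar with the method, the core PSO loop referenced throughout these entries can be sketched in a few lines. The following is a generic, minimal implementation, not the authors' excavator-specific code; the objective, bounds, swarm size, and coefficient values are illustrative assumptions:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via standard PSO; returns (best position, best value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function in 3-D
best_x, best_f = pso_minimize(lambda z: np.sum(z ** 2),
                              (np.full(3, -5.0), np.full(3, 5.0)))
```

In a constrained setting like the excavator problem, the inequality/equality constraints would typically be folded into the objective as penalty terms.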

Wang, Dong-yun; Guan, Chen

2013-01-01

318

Background Malaria remains a major cause of morbidity and mortality worldwide. Flow cytometry-based assays that take advantage of fluorescent protein (FP)-expressing malaria parasites have proven to be valuable tools for quantification and sorting of specific subpopulations of parasite-infected red blood cells. However, identification of rare subpopulations of parasites using green fluorescent protein (GFP) labelling is complicated by autofluorescence (AF) of red blood cells and low signal from transgenic parasites. It has been suggested that cell sorting yield could be improved by using filters that precisely match the emission spectrum of GFP. Methods Detection of transgenic Plasmodium falciparum parasites expressing either tdTomato or GFP was performed using a flow cytometer with interchangeable optical filters. Parasitaemia was evaluated using different optical filters and, after optimization of the optics, the GFP-expressing parasites were sorted and analysed by microscopy after cytospin preparation and by imaging cytometry. Results A new approach to evaluate filter performance in flow cytometry using two-dimensional dot plots was developed. By selecting optical filters with narrow bandpass (BP) and maximum position of filter emission close to GFP maximum emission in the FL1 channel (510/20, 512/20 and 517/20; dichroics 502LP and 466LP), AF was markedly decreased and the signal-to-background ratio improved dramatically. Sorting of GFP-expressing parasite populations in infected red blood cells at 90 or 95% purity with these filters resulted in 50-150% increased yield when compared to the standard filter set-up. The purity of the sorted population was confirmed using imaging cytometry and microscopy of cytospin preparations of sorted red blood cells infected with transgenic malaria parasites.
Discussion Filter optimization is particularly important for applications where the FP signal and percentage of positive events are relatively low, such as analysis of parasite-infected samples with the intention of gene-expression profiling and analysis. The approach outlined here results in substantially improved yield of GFP-expressing parasites and requires less sorting time in comparison to standard methods. It is anticipated that this protocol will be useful for a wide range of applications involving rare events. PMID:22950515

2012-01-01

319

Filter feeders and plankton increase particle encounter rates through flow regime control

Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has important implications for understanding selection pressure on the physiology and ecology of organisms, for example filter feeders able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are ~33% of that predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879

Humphries, Stuart

2009-01-01

320

NASA Astrophysics Data System (ADS)

Diesel particle filters have become widely used in the United States since the introduction in 2007 of a more stringent exhaust particulate matter emission standard for new heavy-duty diesel vehicle engines. California has instituted additional regulations requiring retrofit or replacement of older in-use engines to accelerate emission reductions and air quality improvements. This presentation summarizes pollutant emission changes measured over several field campaigns at the Port of Oakland in the San Francisco Bay Area associated with diesel particulate filter use and accelerated modernization of the heavy-duty truck fleet. Pollutants in the exhaust plumes of hundreds of heavy-duty trucks en route to the Port were measured in 2009, 2010, 2011, and 2013. Ultrafine particle number, black carbon (BC), nitrogen oxides (NOx), and nitrogen dioxide (NO2) concentrations were measured at a frequency ≥ 1 Hz and normalized to measured carbon dioxide concentrations to quantify fuel-based emission factors (grams of pollutant emitted per kilogram of diesel consumed). The size distribution of particles in truck exhaust plumes was also measured at 1 Hz. In the two most recent campaigns, emissions were linked on a truck-by-truck basis to installed emission control equipment via the matching of transcribed license plates to a Port truck database. Accelerated replacement of older engines with newer engines and retrofit of trucks with diesel particle filters reduced fleet-average emissions of BC and NOx. Preliminary results from the two most recent field campaigns indicate that trucks without diesel particle filters emit 4 times more BC than filter-equipped trucks. Diesel particle filters increase emissions of NO2, however, and filter-equipped trucks have NO2/NOx ratios that are 4 to 7 times greater than trucks without filters.
Preliminary findings related to particle size distribution indicate that (a) most trucks emitted particles characterized by a single mode of approximately 100 nm in diameter and (b) new trucks originally equipped with diesel particle filters were 5 to 6 times more likely than filter-retrofitted trucks and trucks without filters to emit particles characterized by a single mode in the range of 10 to 30 nm in diameter.

Kirchstetter, T.; Preble, C.; Dallmann, T. R.; DeMartini, S. J.; Tang, N. W.; Kreisberg, N. M.; Hering, S. V.; Harley, R. A.

2013-12-01

321

In a penicillin fermentation process, substrate concentration and biomass concentration greatly influence the yield of the targeted product. However, there are few on-line sensors available to measure these variables in real-time. In this paper, a compact mechanism model is employed to simulate the fed-batch process, and a particle filter is introduced to estimate the substrate and biomass states. Particle filters

Zhonggai Zhao; Xinguang Shao; Biao Huang; Fei Liu

2011-01-01

322

Choosing the Optimal Clipping Ratio for Clipping and Filtering PAR-Reduction Scheme in OFDM

Clipping and Filtering on the Oversampled signal samples (CFO) is a simple and effective peak-to-average power ratio (PAR) reduction method for OFDM signals. However, the PAR-reduction performance and the bit error ratio (BER) performance of CFO conflict with each other. An analysis framework is proposed to select the optimum clipping ratio (CR) which optimizes the consumed power-to-noise ratio
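The envelope-clipping operation at the heart of such CFO schemes is simple to sketch. The following toy example is generic, not the paper's setup; the QPSK subcarriers, 4x oversampling, and 3 dB clipping level are illustrative assumptions:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_signal(x, cr_db):
    """Limit the envelope to cr_db above the RMS level, preserving phase."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
    mag = np.abs(x)
    scale = np.minimum(1.0, a_max / np.maximum(mag, 1e-300))
    return x * scale

# Toy OFDM symbol: 64 QPSK subcarriers, 4x oversampled via a zero-padded IFFT
rng = np.random.default_rng(1)
sym = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), 64)
spec = np.concatenate([sym[:32], np.zeros(192), sym[32:]])
x = np.fft.ifft(spec)
y = clip_signal(x, cr_db=3.0)   # PAPR of y is now bounded near the CR
```

The BER penalty the abstract mentions comes from the distortion this nonlinear scaling introduces; choosing the CR trades that distortion against residual PAR.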

Hua Yu; Gang Wei

2007-01-01

323

Polynomial systems approach to continuous-time weighted optimal linear filtering and prediction

The solution of the optimal weighted minimum-variance estimation problem is considered using a polynomial matrix description for the continuous-time linear system description, which allows for the possible presence of transport delays on the measurements. The filter or predictor is given by the solution of two diophantine equations and is equivalent (in the delay-free case) to the state equation form of

M. J. Grimble

1998-01-01

324

Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

NASA Technical Reports Server (NTRS)

Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.

Downie, John D.

1992-01-01

325

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous design [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported.
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1999-01-01

326

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1998-01-01

327

Objectives Quantifying testicular homogenization resistant spermatid heads (HRSH) is a powerful indicator of spermatogenesis. These counts have traditionally been performed manually using a hemocytometer, but this method can be time consuming and biased. We aimed to develop a protocol to reduce debris for the application of automated counting, which would allow for efficient and unbiased quantification of rat HRSH. Findings We developed a filter-lysis protocol that effectively removes debris from rat testicular homogenates. After filtering and lysing the homogenates, we found no statistical differences between manual (classic and filter-lysis) and automated (filter-lysis) counts using one-way ANOVA with Bonferroni's multiple comparison test. In addition, Pearson's correlation coefficients were calculated to compare the counting methods and there was a strong correlation between the classic manual counts and the filter-lysis manual (r = 0.85, p = 0.002) and the filter-lysis automated (r = 0.89, p = 0.0005) counts. We also tested the utility of the automated method in a low dose exposure model known to decrease HRSH. Adult Fischer 344 rats exposed to 0.33% 2,5-hexanedione (HD) in the drinking water for 12 weeks demonstrated decreased body (p = 0.02) and testes (p = 0.002) weights. In addition, there was a significant reduction in the number of HRSH per testis (p = 0.002) when compared to control. Conclusions A filter-lysis protocol was optimized to purify rat testicular homogenates for automated HRSH counts. Automated counting systems yield unbiased data and can be applied to detect changes in the testis after low dose toxicant exposure. PMID:22240558

Pacheco, Sara E.; Anderson, Linnea M.; Boekelheide, Kim

2013-01-01

328

Fractional particle swarm optimization in multidimensional search space.

In this paper, we propose two novel techniques, which successfully address several major problems in the field of particle swarm optimization (PSO) and promise a significant breakthrough over complex multimodal optimization problems at high dimensions. The first one, which is the so-called multidimensional (MD) PSO, re-forms the native structure of swarm particles in such a way that they can make interdimensional passes with a dedicated dimensional PSO process. Therefore, in an MD search space, where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. Nevertheless, MD PSO is still susceptible to premature convergences due to lack of divergence. Among many PSO variants in the literature, none yields a robust solution, particularly over multimodal complex problems at high dimensions. To address this problem, we propose the fractional global best formation (FGBF) technique, which basically collects all the best dimensional components and fractionally creates an artificial global best (aGB) particle that has the potential to be a better "guide" than the PSO's native gbest particle. This way, the potential diversity that is present among the dimensions of swarm particles can be efficiently used within the aGB particle. We investigated both individual and mutual applications of the proposed techniques over the following two well-known domains: 1) nonlinear function minimization and 2) data clustering. An extensive set of experiments shows that in both application domains, MD PSO with FGBF exhibits an impressive speed gain and converges to the global optima at the true dimension regardless of the search space dimension, swarm size, and the complexity of the problem. PMID:19661007

Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef

2010-04-01

329

A novel chaos particle swarm optimization (PSO) and its application in pavement maintenance decisions

Particle swarm optimization (PSO) is a relatively new random global optimization algorithm, but simple PSO (SPSO) lacks high convergence speed and strong optimization ability. To improve the optimization performance of SPSO, a novel chaos particle swarm optimization (CPSO) algorithm is presented. The characteristics of ergodicity and randomness of chaotic variables are considered to produce the

Yi Shen; Yunfeng Bu; Mingxin Yuan

2009-01-01

330

NASA Astrophysics Data System (ADS)

In this paper, a novel hybrid algorithm featuring a simple index modulation profile with fast-converging optimization is proposed for the design of multichannel fiber Bragg grating (FBG) filters for dense wavelength-division-multiplexing (DWDM) systems. The approach is based on utilizing one of the other FBG design approaches, which may suffer from spectral distortion, as the first step, then performing Lagrange multiplier optimization (LMO) for optimized correction of the spectral distortion. In our design examples, the superposition method is employed as the first design step for its merits of easy fabrication, and the discrete layer-peeling (DLP) algorithm is used to rapidly obtain the initial index modulation profiles for the superposition method. On account of the initially near-optimum index modulation profiles from the first step, the LMO algorithm shows fast convergence to the target reflection spectra in the second step, and the design outcome still retains the advantage of easy fabrication.

Hsin, Chen-Wei

2011-07-01

331

A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Due to the noisy and erroneous measurements of the MEMS gyroscope, it is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with the drift dynamics of the gyroscope, are employed as the system state-space model. Nonlinear attitude kinematics, drift, and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear, high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as the input of the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. PMID:24342270
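The particle filtering stage of such a cascade follows the standard propagate-weight-resample cycle. A minimal bootstrap particle filter for a generic 1-D random-walk model can be sketched as follows; this is an illustration of the general technique, not the paper's quaternion/drift model, and the noise levels are made-up values:

```python
import numpy as np

def bootstrap_pf(y, n_p=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k,
    with w ~ N(0, q^2) and v ~ N(0, r^2); returns posterior-mean estimates."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_p)              # initial particle cloud
    est = []
    for yk in y:
        parts = parts + rng.normal(0.0, q, n_p)    # propagate through dynamics
        logw = -0.5 * ((yk - parts) / r) ** 2      # weight by observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * parts))              # weighted posterior mean
        idx = rng.choice(n_p, n_p, p=w)            # multinomial resampling
        parts = parts[idx]
    return np.array(est)

# Track a slowly drifting state from noisy observations
rng = np.random.default_rng(42)
truth = np.cumsum(rng.normal(0, 0.1, 100))
obs = truth + rng.normal(0, 0.5, 100)
xhat = bootstrap_pf(obs)
```

The filtered estimate should track the truth substantially better than the raw observations, since the resampling step concentrates particles in high-likelihood regions at each time step.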

Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

2014-03-01

332

NASA Astrophysics Data System (ADS)

Underground flow systems, such as oil or gas reservoirs and CO2 storage sites, are an important and challenging class of complex dynamic systems. Lacking information about distributed systems properties (such as porosity, permeability,...) leads to model uncertainties up to a level where quantification of uncertainties may become the dominant question in application tasks. History matching to past production data becomes an extremely important issue in order to improve the confidence of prediction. The accuracy of history matching depends on the quality of the established physical model (including, e.g. seismic, geological and hydrodynamic characteristics, fluid properties etc). The history matching procedure itself is very time consuming from the computational point of view. Even one single forward deterministic simulation may require parallel high-performance computing. This fact makes a brute-force non-linear optimization approach not feasible, especially for large-scale simulations. We present a novel framework for history matching which takes into consideration the nonlinearity of the model and of inversion, and provides a cheap but highly accurate tool for reducing prediction uncertainty. We propose an advanced framework for history matching based on the polynomial chaos expansion (PCE). Our framework reduces complex reservoir models and consists of two main steps. In step one, the original model is projected onto a so-called integrative response surface via very recent PCE technique. This projection is totally non-intrusive (following a probabilistic collocation method) and optimally constructed for available reservoir data at the prior stage of Bayesian updating. The integrative response surface keeps the nonlinearity of the initial model at high order and incorporates all suitable parameters, such as uncertain parameters (porosity, permeability etc.) and design or control variables (injection rate, depth etc.). 
Technically, the computational costs for constructing the response surface depend on the number of parameters and the expansion degree. Step two consists of Bayesian updating in order to match the reduced model to available measurements of state variables or other past or real-time observations of system behavior (e.g. past production data or pressure at monitoring wells during a certain time period). In step two, we apply particle filtering on the integrative response surface constructed in step one. Particle filtering is a strong technique for Bayesian updating which takes into consideration the nonlinearity of the inverse problem in history matching more accurately than the ensemble Kalman filter does. Thanks to the computational efficiency of PCE and the integrative response surface, Bayesian updating for history matching becomes an interactive task and can incorporate real-time measurements.

Oladyshkin, S.; Class, H.; Helmig, R.; Nowak, W.

2011-12-01

333

Numerical experiments with an implicit particle filter for the shallow water equations

NASA Astrophysics Data System (ADS)

The estimation of initial conditions for the shallow water equations from a given set of later data is a well-known test problem for data assimilation codes. A popular approach to this problem is the variational method (4D-Var), i.e. the computation of the mode of the posterior probability density function (pdf) via the adjoint technique. Here, we improve on 4D-Var by computing the conditional mean (the minimum least-square-error estimator) rather than the mode (a biased estimator), and we do so with implicit sampling, a Monte Carlo (MC) importance sampling method. The idea in implicit sampling is to first search for the high-probability region of the posterior pdf and then to find samples in this region. Because the samples are concentrated in the high-probability region, fewer samples are required than with competing MC schemes. The search for the high-probability region can be implemented by a minimization that is very similar to the minimization in 4D-Var, and we make use of a 4D-Var code in our implementation. The samples are obtained by solving algebraic equations with a random right-hand side. These equations can be solved efficiently, so that the additional cost of our approach, compared to traditional 4D-Var, is small. The long-term goal is to assimilate experimental data, obtained with the CORIOLIS turntable in Grenoble (France), to study the drift of a vortex. We present results from numerical twin experiments as a first step towards our long-term goal. We discretize the shallow water equations on a square domain (2.5 m × 2.5 m) using finite differences on a staggered grid of size 28 × 28 and a fourth-order Runge-Kutta scheme. We assume open boundary conditions and estimate the initial state (velocities and surface height) given noisy observations of the state. We solve the optimization problem using a 4D-Var code that relies on an L-BFGS method; the random algebraic equations are solved with random maps, i.e. we look for solutions in given, but random, directions of the state space. In our numerical experiments, we varied the availability of the data (in both space and time) as well as the variance of the observation noise. We found that the implicit particle filter is reliable and efficient in all scenarios we considered. The implicit sampling method could improve the accuracy of the traditional variational approach. Moreover, we obtain quantitative measures of the uncertainty of the state estimate "for free," while no information about the uncertainty is easily available using the traditional 4D-Var method only.
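The weight degeneracy that implicit sampling is designed to mitigate (see also the discussion of the optimal proposal above) is commonly monitored through the effective sample size of the importance weights. A minimal, generic sketch, not taken from this paper:

```python
import numpy as np

def effective_sample_size(logw):
    """Effective sample size from unnormalized log importance weights.
    Values near the ensemble size mean healthy weights; near 1 means degeneracy."""
    w = np.exp(logw - logw.max())   # subtract max before exponentiating, for stability
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# Uniform weights: every particle contributes equally (ESS = N)
ess_uniform = effective_sample_size(np.zeros(100))

# One dominant particle: the ensemble has collapsed (ESS -> 1)
ess_degenerate = effective_sample_size(np.concatenate([[0.0], np.full(99, -50.0)]))
```

When the ESS drops below a chosen threshold (a common rule of thumb is half the ensemble size), a resampling step is triggered.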

Souopgui, I.; Chorin, A. J.; Hussaini, M.

2012-12-01

334

The use of an inert, radioactively labeled microsphere as a measure of particle accumulation (filtration activity) by Mulinia lateralis (Say) and Mytilus edulis L. was evaluated. Bottom sediment plus temperature and salinity of the water were varied to induce changes in filtratio...

335

The use of double base number system (DBNS) multiplier coefficients reduces the complexity and power consumption in the hardware implementation of FIR digital filters. The use of genetic algorithms for optimization of the constituent DBNS multiplier coefficients can further reduce the complexity of the digital filter. This paper presents a novel genetic algorithm based on correlative roulette selection (CRS) for

Sai Mohan Kilambi; Behrouz Nowrouzian

2006-01-01

336

ParticleCall: A particle filter for base calling in next-generation sequencing systems

Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina's sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina's Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than Illumina's base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067

2012-01-01

337

Bare bones particle swarm optimization with scale matrix adaptation.

Bare bones particle swarm optimization (BBPSO) is a swarm algorithm that has shown potential for solving single-objective unconstrained optimization problems over continuous search spaces. However, it suffers from the premature convergence problem, meaning it may get trapped in a local optimum when solving multimodal problems. In order to address this drawback and improve the performance of the BBPSO, we propose a variant of this algorithm, which we name BBPSO with scale matrix adaptation (SMA), or SMA-BBPSO for short. In the SMA-BBPSO, the position of a particle is selected from a multivariate t-distribution with a rule for adaptation of its scale matrix. We use the multivariate t-distribution in its hierarchical form, as a scale mixture of normal distributions. The t-distribution has heavier tails than those of the normal distribution, which increases the ability of the particles to escape from a local optimum. In addition, our approach includes the normal distribution as a particular case. As a consequence, the t-distribution can be applied during the optimization process while maintaining the proper balance between exploration and exploitation. We also propose a simple update rule to adapt the scale matrix associated with a particle. Our strategy consists of adapting the scale matrix of a particle such that the best position found by any particle in its neighborhood is sampled with maximum likelihood in the next iteration. A theoretical analysis was developed to explain how the SMA-BBPSO works, and an empirical study was carried out to evaluate the performance of the proposed algorithm. The experimental results show the suitability of the proposed approach in terms of effectiveness in finding good solutions for all benchmark problems investigated. Nonparametric statistical tests indicate that SMA-BBPSO shows a statistically significant improvement compared with other swarm algorithms. PMID:25137686
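The sampling-based position update that distinguishes bare bones PSO from the standard velocity-based update can be sketched as follows. This is a generic Gaussian BBPSO for illustration, not the authors' SMA variant (which samples from a multivariate t-distribution with an adapted scale matrix); the test function and parameter values are illustrative assumptions:

```python
import numpy as np

def bbpso(f, lo, hi, n_particles=30, n_iters=200, seed=0):
    """Bare bones PSO: each particle's next position is drawn from a Gaussian
    centred midway between its personal best and the global best."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    for _ in range(n_iters):
        g = pbest[np.argmin(pval)]
        mu = 0.5 * (pbest + g)              # mean between pbest and gbest
        sigma = np.abs(pbest - g) + 1e-12   # spread set by their separation
        x = np.clip(rng.normal(mu, sigma), lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[np.argmin(pval)], pval.min()

# Example: minimize the sphere function in 2-D
best_x, best_f = bbpso(lambda z: np.sum(z ** 2),
                       np.full(2, -10.0), np.full(2, 10.0))
```

Replacing `rng.normal(mu, sigma)` with a heavier-tailed draw such as `mu + sigma * rng.standard_t(df, size=mu.shape)` gives the kind of t-distributed sampling that the SMA-BBPSO builds on, trading some exploitation for a better chance of escaping local optima.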

Campos, Mauro; Krohling, Renato A; Enriquez, Ivan

2014-09-01

338

Representation of Probability Density Functions from Orbit Determination using the Particle Filter

NASA Technical Reports Server (NTRS)

Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher-order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy depends on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensionality reduction method that is capable of maintaining the higher-order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
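As background to the particle-filter step, a standard systematic-resampling routine (a common PF building block, not code from the paper) can be sketched as:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: returns particle indices drawn with
    probability proportional to the normalized importance weights,
    using a single uniform offset for low-variance selection."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

# Particles with zero weight are never selected:
idx = systematic_resample([0.5, 0.5, 0.0, 0.0], np.random.default_rng(0))
```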

Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

2012-01-01

339

Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique

NASA Astrophysics Data System (ADS)

An application of an intelligent search technique to find the optimal parameters of a power system stabilizer (PSS) considering a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search techniques in tuning the PIDPSS parameters. System damping is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PIDPSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.

Oonsivilai, Anant; Marungsri, Boonruang

2008-10-01

340

This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...

Miller, Travis Reed

2010-01-01

341

We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 μm) were used as the test aerosol particles, and their number concentration was measured using

Jae Hong Park; Ki Young Yoon; Hyungjoo Na; Yang Seon Kim; Jungho Hwang; Jongbaeg Kim; Young Hun Yoon

2011-01-01

342

Signal to noise ratio based filter optimization in triple energy window scatter correction.

Triple energy window (TEW) scatter correction estimates the contribution of scattered photons to the acquisition data by acquiring additional data through two narrow energy windows placed adjacent to the main (photopeak) energy window. The contribution is estimated by linear interpolation and then subtracted. Noise amplification is reduced by filtering both the photopeak scintigram and the scatter estimate. We studied the settings of each filter using a physical phantom filled with a 201Tl solution, resulting in count densities comparable to clinical studies. The performance of order-8 Butterworth filters at different cut-off frequencies (CoFs) was compared based on signal to noise ratios (SNRs). The highest SNRs were obtained when the noisy scatter information was strongly filtered, with the CoF less than or equal to 0.07 cycles/pixel (cpp). The best CoF for the filter of the photopeak image is object-size dependent; smaller objects require a higher CoF. For objects with a size near the SPECT spatial resolution (approximately 15 mm) the optimal CoF is 0.18 cpp. For larger objects (31.8 mm) the highest SNR was obtained with a CoF of 0.13 cpp. A CoF of 0.16 cpp is a good compromise for all objects with a diameter equal to or larger than the spatial resolution. These results depend on the initial signal to noise ratio of the acquisition data and thus on the count density. PMID:10984241
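The linear-interpolation step described above is commonly written as a trapezoidal estimate over the main window; a minimal sketch (the window widths and counts below are illustrative, not values from the study):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Trapezoidal TEW scatter estimate: counts in the two narrow windows
    are converted to count rates per keV and linearly interpolated
    across the width of the main (photopeak) window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# Illustrative correction: total photopeak counts minus scatter estimate.
scatter = tew_scatter_estimate(c_lower=100, c_upper=40,
                               w_lower=3.0, w_upper=3.0, w_main=20.0)
primary = 5000 - scatter
```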

Blokland, K J; Winn, R D; Pauwels, E K

2000-08-01

343

Indoor patient position estimation using particle filtering and wireless body area networks.

Wireless Body Area Networks (WBANs) have recently been promoted to monitor the physiological parameters of patients in an unobtrusive and natural way. This paper aims to take advantage of the ongoing wireless communication links between the body sensors to provide estimated position information for patients or particular body area networks, making daily activity surveillance possible for further analysis. The proposed particle filtering based localization algorithm simply uses the received radio signal strength information from beacons or its neighbors to infer its own pose, requiring no additional hardware or instruments. Theoretical analysis and simulation experiments are presented to examine the performance of the location estimation method. PMID:18002445

Ren, Hongliang; Meng, Max Q H; Xu, Lisheng

2007-01-01

344

Context Discovery in Mobile Environments: A Particle Swarm Optimization Approach

We introduce a novel application of Particle Swarm Optimization in the mobile computing domain. We focus on context aware applications and investigate the context discovery problem in dynamic environments. Specifically, we investigate those scenarios where nodes with context aware applications are trying to (physically) locate up-to-date context, captured by other nodes. We establish the concept of context quality (an ageing

Christos Anagnostopoulos; Stathes Hadjiefthymiades

2009-01-01

345

Parallel Particle Swarm Optimization Algorithm Based on Graphic Processing Units

A novel parallel approach to implement the particle swarm optimization (PSO) algorithm on graphic processing units (GPU) in a personal computer is proposed in this chapter. By using the general-purpose computing ability of the GPU and the compute unified device architecture (CUDA) software platform developed by NVIDIA, the PSO algorithm can be executed in parallel on the GPU. The process of

Ying Tan; You Zhou

346

Dynamic multi-swarm particle swarm optimizer with harmony search

In this paper, the dynamic multi-swarm particle swarm optimizer (DMS-PSO) is improved by hybridizing it with the harmony search (HS) algorithm and the resulting algorithm is abbreviated as DMS-PSO-HS. We present a novel approach to merge the HS algorithm into each sub-swarm of the DMS-PSO. Combining the exploration capabilities of the DMS-PSO and the stochastic exploitation of the HS, the

Shi-Zheng Zhao; Ponnuthurai N. Suganthan; Quan-Ke Pan; Mehmet Fatih Tasgetiren

2011-01-01

347

Optimal estimation of diffusion coefficients from single-particle trajectories

NASA Astrophysics Data System (ADS)

How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
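For pure diffusion recorded without motion blur, the covariance-based estimator has a simple closed form; the sketch below assumes a 1-D trajectory and follows the estimator's published structure (a mean-squared-displacement term plus a covariance term that cancels the localization-noise bias):

```python
import numpy as np

def cve_diffusion_coefficient(x, dt):
    """Covariance-based estimate of the diffusion coefficient D from a
    1-D trajectory x sampled at interval dt. The second (covariance)
    term removes the bias from localization noise, which makes
    consecutive displacements negatively correlated."""
    dx = np.diff(np.asarray(x, dtype=float))
    return np.mean(dx ** 2) / (2.0 * dt) + np.mean(dx[:-1] * dx[1:]) / dt

# Sanity check on simulated free diffusion with D = 1.0:
rng = np.random.default_rng(1)
dt, D = 0.01, 1.0
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), 200_000)
x = np.concatenate([[0.0], np.cumsum(steps)])
d_hat = cve_diffusion_coefficient(x, dt)
```

The full estimator in the paper additionally handles motion blur during camera exposure; that refinement is omitted here.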

Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

2014-02-01

348

Optimizing Magnetite Nanoparticles for Mass Sensitivity in Magnetic Particle Imaging

Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, we explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally, at an arbitrarily chosen driving frequency of 250 kHz, using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. Results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: Our experimental results show clear variation in the MPI signal intensity as a function of MNP size that is in good agreement with modeled results. A maximum in the plot of MPI signal vs. MNP size indicates there is a particular size that is optimal for the chosen frequency of 250 kHz. Conclusions: For MPI at any chosen frequency, there will exist a characteristic particle size that generates maximum signal amplitude. We illustrate this at 250 kHz with particles of 15 nm core diameter.
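The Langevin theory underlying the analysis gives the equilibrium magnetization of a superparamagnetic particle; a minimal sketch (the symbols are the standard ones, not notation from the paper):

```python
import numpy as np

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi, the normalized
    equilibrium magnetization M/Ms of a superparamagnetic particle,
    where xi = mu0*m*H / (kB*T). Uses the series L(xi) ~ xi/3 near
    zero to avoid a 0/0."""
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-4
    safe = np.where(small, 1.0, xi)  # placeholder where xi ~ 0
    return np.where(small, xi / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)
```

Larger cores give a larger moment m, hence a steeper L(xi) and a stronger harmonic response, until size-dependent relaxation (treated in the paper's dynamic model, not here) limits the gain.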

Ferguson, R. Matthew; Minard, Kevin R.; Khandhar, Amit P.; Krishnan, Kannan M.

2011-03-01

349

Parallel global optimization with the particle swarm algorithm.

Present day engineering optimization problems often impose large computational demands, resulting in long solution times even on a modern high-end processor. To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the particle swarm optimization (PSO) algorithm. Parallel PSO performance was evaluated using two categories of optimization problems possessing multiple local minima-large-scale analytical test problems with computationally cheap function evaluations and medium-scale biomechanical system identification problems with computationally expensive function evaluations. For load-balanced analytical test problems formulated using 128 design variables, speedup was close to ideal and parallel efficiency above 95% for up to 32 nodes on a Beowulf cluster. In contrast, for load-imbalanced biomechanical system identification problems with 12 design variables, speedup plateaued and parallel efficiency decreased almost linearly with increasing number of nodes. The primary factor affecting parallel performance was the synchronization requirement of the parallel algorithm, which dictated that each iteration must wait for completion of the slowest fitness evaluation. When the analytical problems were solved using a fixed number of swarm iterations, a single population of 128 particles produced a better convergence rate than did multiple independent runs performed using sub-populations (8 runs with 16 particles, 4 runs with 32 particles, or 2 runs with 64 particles). These results suggest that (1) parallel PSO exhibits excellent parallel performance under load-balanced conditions, (2) an asynchronous implementation would be valuable for real-life problems subject to load imbalance, and (3) larger population sizes should be considered when multiple processors are available. PMID:17891226

Schutte, J F; Reinbolt, J A; Fregly, B J; Haftka, R T; George, A D

2004-12-01

350

Optimal hydrograph separation filter to evaluate transport routines of hydrological models

NASA Astrophysics Data System (ADS)

Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process-based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances selected. The OHS parameters were then used to demonstrate how to develop a benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than traditional measures like r² or the Nash-Sutcliffe efficiency alone.
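For context, one widely used one-parameter recursive digital filter of the kind discussed is the Lyne-Hollick filter; the sketch below shows a single forward pass. The paper's contribution is estimating such separation parameters objectively from stream hydrochemistry, not this particular filter, and the default alpha here is only a commonly quoted value:

```python
import numpy as np

def lyne_hollick_pass(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.
    Splits streamflow q into a rapid (quickflow) and a slow (baseflow)
    component using one filter parameter alpha."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for k in range(1, len(q)):
        f = alpha * quick[k - 1] + 0.5 * (1.0 + alpha) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])  # keep the component physical
    return quick, q - quick

quick, slow = lyne_hollick_pass([1.0, 5.0, 3.0, 2.0, 1.5])
```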

Rimmer, Alon; Hartmann, Andreas

2014-05-01

351

Initial parameters problem of WNN based on particle swarm optimization

NASA Astrophysics Data System (ADS)

Stock price prediction by a wavelet neural network amounts to minimizing the RMSE by adjusting the initial values of the network parameters, the training data percentage, and the threshold value in order to predict the fluctuation of a stock price over two weeks. The objective of this dissertation is to reduce the number of parameters that must be adjusted to minimize the RMSE. There are three initial network parameters: w, t, and d. These three parameters are optimized with the Particle Swarm Optimization method, and a comparison is made with the performance of the original program, showing that the RMSE can be even lower than before the optimization. It is also shown that there is no need to adjust the training data percentage and the threshold value for 68% of the stocks when the training data percentage is set at 10% and the threshold value is set at 0.01.

Yang, Chi-I.; Wang, Kaicheng; Chang, Kueifang

2014-04-01

352

Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me...

353

Parameter estimation using Multiobjective Particle Swarm Optimization (MOPSO)

NASA Astrophysics Data System (ADS)

In the current application, a multiobjective optimization approach is presented for estimation of parameters of hydrologic models. The complexity of hydrologic processes demands efficient and effective tools to fully determine system characteristics. A relatively new optimization algorithm, known as particle swarm optimization (PSO) has been employed here for parameter estimation. The PSO algorithm comes from the family of evolutionary computation techniques and has been applied in various other fields. The approach was initially devised for a single objective function, but in the current application we introduce a multiobjective algorithm, called multiobjective particle swarm optimization (MOPSO), and test it on two different kinds of modeling efforts in hydrology, namely a support vector machine (SVM) model for predicting soil moisture, and a well known conceptual rainfall-runoff (CRR) model, the Sacramento Soil Moisture Accounting (SAC-SMA) model, for estimating streamflow. The algorithm is modified to address multiobjective problems by introducing the Pareto rank concept. The performance of the algorithm is also tested for two test functions.
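The Pareto rank concept used to extend PSO to multiple objectives rests on a dominance test; a minimal sketch for minimization problems (the function names are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Indices of the rank-1 (non-dominated) objective vectors."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]
```

For example, `pareto_front([(1, 2), (2, 1), (3, 3)])` keeps the first two vectors and discards the dominated third.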

Gill, M.

2005-12-01

354

In this paper, the authors propose a particle swarm optimization (PSO) for a discrete-time inverse optimal control scheme of a doubly fed induction generator (DFIG). For the inverse optimal scheme, a control Lyapunov function (CLF) is proposed to obtain an inverse optimal control law in order to achieve trajectory tracking. A posteriori, it is established that this control law minimizes a meaningful cost function. The CLFs depend on matrix selection in order to achieve the control objectives; this matrix is determined by two mechanisms: initially, fixed parameters are proposed for this matrix by a trial-and-error method and then by using the PSO algorithm. The inverse optimal control scheme is illustrated via simulations for the DFIG, including the comparison between both mechanisms. PMID:24273145

Ruiz-Cruz, Riemann; Sanchez, Edgar N; Ornelas-Tellez, Fernando; Loukianov, Alexander G; Harley, Ronald G

2013-12-01

355

NASA Astrophysics Data System (ADS)

A novel algorithm is presented in this study for the estimation of spacecraft attitudes and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the newest measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden as well. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.

Kiani, Maryam; Pourtakdoust, Seid H.

2014-12-01

356

A radiative transfer scheme that considers absorption, scattering, and distribution of light-absorbing elemental carbon (EC) particles collected on a quartz-fiber filter was developed to explain simultaneous filter reflectance and transmittance observations prior to and during...

357

Feature extraction is a critical step in real-time spike sorting after a spike is detected. Features should be informative and noise-insensitive for high classification accuracy. This paper describes a new feature extraction method that utilizes a feature denoising filter to improve noise immunity while preserving spike information. Six features were extracted from filtered spikes, including a newly developed feature, and a separability index was applied to select optimal features. Using a set of the three highest-performing features, which includes the new feature, this method can achieve a spike classification error as low as 5% for the worst-case noise level of 0.2. The computational complexity is only 11% of that of the principal component analysis method, and it costs only nine registers per channel. PMID:25570192

Yuning Yang; Boling, Samuel; Eftekhar, Amir; Paraskevopoulou, Sivylla E; Constandinou, Timothy G; Mason, Andrew J

2014-08-01

358

Modeling Filter Bypass: Impact on Filter Efficiency

Current models and test methods for determining filter efficiency ignore filter bypass, the air that circumvents filter media because of gaps around the filter or filter housing. In this paper, we develop a general model to estimate the size-resolved particle removal efficiency, including bypass, of HVAC filters. The model applies the measured pressure drop of the filter to determine the

Matthew Ward; Jeffrey Siegel

359

Diversity enhanced particle swarm optimizer for global optimization of multimodal problems

This paper presents a diversity enhanced particle swarm optimizer (DivEnh-PSO) which uses an external memory to enhance the diversity of the swarm and to discourage premature convergence. The external memory holds selected past solutions with good diversity. Selected past solutions are periodically injected into the swarm. This approach does not require additional function evaluations as past solutions are used to

Shuguang Zhao; Ponnuthurai N. Suganthan

2009-01-01

360

Particle swarm optimization of ascent trajectories of multistage launch vehicles

NASA Astrophysics Data System (ADS)

Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. 
The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
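The velocity-updating formula that drives the swarm is, in its canonical form, an inertia term plus two stochastically weighted attraction terms; the coefficients below are common defaults, not values from the paper:

```python
import numpy as np

def pso_velocity(v, x, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Canonical PSO velocity update: an inertia term plus stochastically
    weighted attractions toward the particle's own best position (p_best)
    and the best position found by the swarm (g_best)."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(len(x)), rng.random(len(x))
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```

The two random vectors r1 and r2 are what makes the population migrate stochastically toward promising regions while sharing information through g_best.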

Pontani, Mauro

2014-02-01

361

Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics. PMID:20064026

Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

2010-01-01

362

Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we measured the deposition of particles ranging from 0.3 to 10 μm in an occupied townhouse

Cynthia Howard-Reed; Lance A. Wallace; Steven J. Emmerich

2003-01-01

363

The redox activity of diesel exhaust particles (DEP) collected from a light-duty diesel passenger car engine was examined using the dithiothreitol (DTT) assay. DEP was highly redox-active, causing DTT to decay at a rate of 23-61 pmol min(-1) μg(-1) of particle used in the assay, which was an order of magnitude higher than ambient coarse and fine particulate matter (PM) collected from downtown Toronto. Only 2-11% of the redox activity was in the water-soluble portion, while the remainder occurred at the black carbon surface. This is in contrast to redox-active secondary organic aerosol constituents, in which upward of 90% of the activity occurs in the water-soluble fraction. The redox activity of DEP is not extractable by moderately polar (methanol) and nonpolar (dichloromethane) organic solvents, and is hypothesized to arise from redox-active moieties contiguous with the black carbon portion of the particles. These measurements illustrate that "Filterable Redox Cycling Activity" may therefore be useful to distinguish black carbon-based oxidative capacity from water-soluble organic-based activity. The difference in chemical environment leading to redox activity highlights the need to further examine the relationship between activity in the DTT assay and toxicology measurements across particles of different origins and composition. PMID:23470039

McWhinney, Robert D; Badali, Kaitlin; Liggio, John; Li, Shao-Meng; Abbatt, Jonathan P D

2013-04-01

364

Discrete Particle Swarm Optimization with Scout Particles for Library Materials Acquisition

Materials acquisition is one of the critical challenges faced by academic libraries. This paper presents an integer programming model of the studied problem by considering how to select materials in order to maximize the average preference and the budget execution rate under some practical restrictions including departmental budget, limitation of the number of materials in each category and each language. To tackle the constrained problem, we propose a discrete particle swarm optimization (DPSO) with scout particles, where each particle, represented as a binary matrix, corresponds to a candidate solution to the problem. An initialization algorithm and a penalty function are designed to cope with the constraints, and the scout particles are employed to enhance the exploration within the solution space. To demonstrate the effectiveness and efficiency of the proposed DPSO, a series of computational experiments are designed and conducted. The results are statistically analyzed, and it is evinced that the proposed DPSO is an effective approach for the studied problem. PMID:24072983
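The binary-matrix particles in the DPSO require a discrete position update; the classic sigmoid rule from Kennedy and Eberhart's discrete PSO is one such operator, shown here as a generic sketch since the paper's exact operator may differ:

```python
import numpy as np

def binary_position_update(velocity, rng):
    """Sigmoid-mapped discrete PSO update: each bit is set to 1 with
    probability sigmoid(v), so large positive velocities push bits
    toward 1 and large negative velocities toward 0."""
    prob = 1.0 / (1.0 + np.exp(-np.asarray(velocity, dtype=float)))
    return (rng.random(prob.shape) < prob).astype(int)
```

In a materials-selection encoding, each bit would indicate whether a candidate item is purchased; constraint handling (budgets, category limits) is done separately, e.g. via the penalty function the abstract mentions.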

Wu, Yi-Ling; Ho, Tsu-Feng; Shyu, Shyong Jian; Lin, Bertrand M T

2013-01-01

366

Optimized model of oriented-line-target detection using vertical and horizontal filters

NASA Astrophysics Data System (ADS)

A line-element target differing sufficiently in orientation from a background of line elements can be visually detected easily and quickly; orientation thresholds for such detection are lowest when the background elements are all vertical or all horizontal. A simple quantitative model of this performance was constructed from (1) two classes of anisotropic filters, (2) a nonlinear point transformation, and (3) estimation of a signal-to-noise ratio based on responses to images with and without a target. A Monte Carlo optimization procedure (simulated annealing) was used to determine the model parameter values required to provide an accurate description of psychophysical data on orientation increment thresholds.

Westland, Stephen; Foster, David H.

1995-08-01

367

NASA Astrophysics Data System (ADS)

A frequency domain implementation of the Optimal Trade-off Maximum Average Correlation Height (OT-MACH) filter has been optimized to classify target vehicles acquired from a Forward Looking Infra Red (FLIR) sensor. The clutter noise does not have a white spectrum and models employing the power spectral density of the background clutter require a predefined threshold. A method of automatically adjusting the noise model in the filter by using the input image statistical information has been introduced. Parameter surfaces for the remaining OT-MACH variables are calculated in order to determine optimal operating conditions for the view independent recognition of vehicles in highly cluttered FLIR imagery.

Alkandri, Ahmad; Gardezi, Akber; Birch, Philip; Young, Rupert; Chatwin, Chris

2011-04-01

368

Hypothyroidism in infants is caused by insufficient production of hormones by the thyroid gland. Due to stress in the chest cavity as a result of the enlarged liver, their cry signals are unique and can be distinguished from healthy infant cries. This study investigates the effect of feature selection with Binary Particle Swarm Optimization (BPSO) on the performance of a Multilayer Perceptron (MLP) classifier in discriminating between healthy infants and infants with hypothyroidism from their cry signals. The feature extraction process was performed on the Mel Frequency Cepstral (MFC) coefficients. Performance of the MLP classifier was examined by varying the number of coefficients. It was found that BPSO enhances the classification accuracy while reducing the computational load of the MLP classifier. The highest classification accuracy of 99.65% was achieved for the MLP classifier with 36 filter banks, 5 hidden nodes and 11 BPSO-optimised MFC coefficients. PMID:22254916

Zabidi, A; Khuan, L Y; Mansor, W; Yassin, I M; Sahak, R

2011-01-01

369

OBJECTIVES: Air pollution particulates have been identified as having adverse effects on respiratory health. The present study was undertaken to further clarify the effects of diesel exhaust on bronchoalveolar cells and soluble components in normal healthy subjects. The study was also designed to evaluate whether a ceramic particle trap at the end of the tail pipe of an idling engine would reduce indices of airway inflammation. METHODS: The study comprised three exposures in 10 healthy never-smoking subjects: air, diluted diesel exhaust, and diluted diesel exhaust filtered with a ceramic particle trap. The exposures were given for 1 hour in randomised order about 3 weeks apart. The diesel exhaust exposure apparatus has previously been carefully developed and evaluated. Bronchoalveolar lavage was performed 24 hours after exposures, and the lavage fluids from the bronchial and bronchoalveolar region were analysed for cells and soluble components. RESULTS: The particle trap reduced the mean steady-state number of particles by 50%, but the concentrations of the other measured compounds were almost unchanged. Diesel exhaust caused an increase in neutrophils in airway lavage, together with an adverse influence on phagocytosis by alveolar macrophages in vitro. Furthermore, the diesel exhaust was found to induce a migration of alveolar macrophages into the airspaces, together with a reduction in CD3+CD25+ cells (CD = cluster of differentiation). The use of the specific ceramic particle trap at the end of the tail pipe was not sufficient to abolish these effects completely when interacting with the exhaust from an idling vehicle. CONCLUSIONS: The current study showed that exposure to diesel exhaust may induce neutrophil and alveolar macrophage recruitment into the airways and suppress alveolar macrophage function. The particle trap did not cause a significant reduction of the effects induced by diesel exhaust compared with unfiltered diesel exhaust. Further studies are warranted to evaluate more efficient treatment devices to reduce adverse airway reactions to diesel exhaust. PMID:10492649

Rudell, B.; Blomberg, A.; Helleday, R.; Ledin, M. C.; Lundback, B.; Stjernberg, N.; Horstedt, P.; Sandstrom, T.

1999-01-01

370

A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

NASA Technical Reports Server (NTRS)

A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
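
The asynchronous evaluation idea can be sketched as follows: instead of a per-iteration barrier, each particle is updated and resubmitted as soon as its own evaluation returns. This is an illustrative sketch, not the authors' implementation; the toy objective, parameter values and thread pool are assumptions.

```python
import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def sphere(x):  # toy objective; stands in for an expensive simulation
    return sum(xi * xi for xi in x)

def async_pso(f, dim=4, swarm=8, budget=400, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pval = [float("inf")] * swarm
    gbest, gval = X[0][:], float("inf")
    evals = 0
    with ThreadPoolExecutor(max_workers=4) as pool:
        pending = {pool.submit(f, X[i][:]): i for i in range(swarm)}
        evals += swarm
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i = pending.pop(fut)
                val = fut.result()
                # update this particle immediately -- no iteration barrier
                if val < pval[i]:
                    pval[i], pbest[i] = val, X[i][:]
                if val < gval:
                    gval, gbest = val, X[i][:]
                for d in range(dim):
                    V[i][d] = (w * V[i][d]
                               + c1 * rng.random() * (pbest[i][d] - X[i][d])
                               + c2 * rng.random() * (gbest[d] - X[i][d]))
                    X[i][d] += V[i][d]
                if evals < budget:  # resubmit only while budget remains
                    pending[pool.submit(f, X[i][:])] = i
                    evals += 1
    return gbest, gval
```

Because slow evaluations never block fast ones, a heterogeneous cluster stays busy, which is the source of the speedup the abstract reports.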

Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

2005-01-01

371

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made varies with the dimension of the optimization problem, and each selected particle donates the value in the location of its randomly selected dimension from its personal best. After constructing the potential particle position, some local search is done around its neighbourhood in comparison with the current swarm global best position. It is then used to replace the global best particle position if it is found to be better; otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants; two of the variants implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants. PMID:24723827
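
The candidate-construction step described above can be sketched roughly as follows. The exact donation rule is our interpretation of the abstract, and the subsequent neighbourhood search is omitted; none of this is the paper's own code.

```python
import random

def local_search_candidate(pbests, gbest, f, rng=None):
    """Sketch: each slot of a candidate position is filled by a randomly
    selected particle, which donates the value held at a randomly chosen
    dimension of its personal best. The candidate replaces the global
    best only if it evaluates better; otherwise gbest is kept."""
    rng = rng or random.Random(0)
    dim = len(gbest)
    cand = [rng.choice(pbests)[rng.randrange(dim)] for _ in range(dim)]
    fc, fg = f(cand), f(gbest)
    return (cand, fc) if fc < fg else (gbest[:], fg)
```

The construction recombines information already discovered by the swarm, which is why it can pull a prematurely converged global best to a better region.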

Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

2014-01-01

373

Optimization of particle fluence in micromachining of CR-39

NASA Astrophysics Data System (ADS)

Polyallyl diglycol carbonate (CR-39 etched track detector) material was irradiated with various doses of 2 MeV protons and alpha-particles in order to optimize the fluence for P-beam writing of CR-39. Irradiations were performed at the Institute of Nuclear Research, Debrecen, Hungary and at the National University of Singapore. Post-irradiation work was carried out in Debrecen. The fluence in the irradiated area was sufficiently high that the latent tracks overlapped and the region could be removed collectively with short etching times of the order of less than 1 min. Theoretical calculations based on analytical and Monte Carlo simulations were performed to calculate the probability of multiple latent track overlap. The optimal particle fluence was found by minimising the fluence and etching time at which collective removal of latent tracks could be observed. A short etching time is required to obtain high-resolution microstructures, while a low particle fluence is desirable for economic reasons, and also because high fluences increase the risk of unwanted damage (e.g. melting).

Rajta, I.; Baradács, E.; Bettiol, A. A.; Csige, I.; Tőkési, K.; Budai, L.; Kiss, Á. Z.

2005-04-01

374

Sizing of particles in industrial processes is of great technical interest, and therefore different physics-based techniques have been developed. The objective of this study was to review the characteristics of modern sizing instruments based on a modified fibre-optical spatial filtering technique (SFT). Fibre-optical spatial filtering velocimetry was modified by fibre-optical spot scanning in order to determine simultaneously the size and

Petrak, Dieter; Dietrich, Stefan; Eckardt, Günter; Köhler, Michael

2011-01-01

375

NASA Astrophysics Data System (ADS)

We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider the response of a horizontally layered anisotropic earth with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation across iterations and the frequency distribution of the model parameters to reduce the ambiguity. The result is promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.

Pekşen, Ertan; Yas, Türker; Kıyak, Alper

2014-09-01

376

Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish the fact that LDIW-PSO is very much efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO with the latter outperforming both in the simulation experiments conducted. PMID:24324383
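
The linear decreasing inertia weight scheme, with velocity limits taken as a fraction of the search-space range in the spirit of the parameter tuning the paper describes, can be sketched as follows. The concrete values (w from 0.9 to 0.4, a 15% velocity fraction) are common defaults assumed here, not values taken from the paper.

```python
import random

def ldiw_pso(f, lb, ub, n=20, iters=100, w_max=0.9, w_min=0.4,
             c1=2.0, c2=2.0, v_frac=0.15, seed=0):
    """LDIW-PSO sketch: the inertia weight decreases linearly from
    w_max to w_min over the run; velocities are clamped to a fraction
    (v_frac) of the search-space range in each dimension."""
    rng = random.Random(seed)
    dim = len(lb)
    vmax = [v_frac * (ub[d] - lb[d]) for d in range(dim)]
    X = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]          # personal bests
    pv = [f(x) for x in X]
    g = min(range(n), key=lambda i: pv[i])
    gbest, gv = P[g][:], pv[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)  # linear decrease
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax[d], min(vmax[d], V[i][d]))
                X[i][d] = max(lb[d], min(ub[d], X[i][d] + V[i][d]))
            v = f(X[i])
            if v < pv[i]:
                pv[i], P[i] = v, X[i][:]
                if v < gv:
                    gv, gbest = v, X[i][:]
    return gbest, gv
```

Early iterations (large w) favour exploration; late iterations (small w) favour exploitation, which is exactly the momentum trade-off the abstract discusses.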

Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

2013-01-01

377

NASA Astrophysics Data System (ADS)

In this work, we propose a new ground moving target indicator (GMTI) radar based ground vehicle tracking method which exploits domain knowledge. Multiple state models are considered and a Monte-Carlo sampling based algorithm is preferred due to the manoeuvring of the ground vehicle and the non-linearity of the GMTI measurement model. Unlike the commonly used algorithms such as the interacting multiple model particle filter (IMMPF) and bootstrap multiple model particle filter (BS-MMPF), we propose a new algorithm integrating the more efficient auxiliary particle filter (APF) into a Bayesian framework. Moreover, since the movement of the ground vehicle is likely to be constrained by the road, this information is taken as the domain knowledge and applied together with the tracking algorithm for improving the tracking performance. Simulations are presented to show the advantages of both the new algorithm and incorporation of the road information by evaluating the root mean square error (RMSE).
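
For readers unfamiliar with Monte-Carlo sampling based trackers, a minimal bootstrap particle filter on a toy 1-D model illustrates the propagate-weight-resample cycle that the auxiliary particle filter refines; the model, noise levels and resampling scheme below are illustrative assumptions, not the paper's GMTI setup.

```python
import math, random

def bootstrap_pf(obs, n=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random walk x_k = x_{k-1} + N(0, q)
    observed as y_k = x_k + N(0, r). The proposal is the transition prior,
    weights are the observation likelihoods, and multinomial resampling
    combats weight degeneracy. Returns the filtered mean at each step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 2.0) for _ in range(n)]
    means = []
    for y in obs:
        # propagate each particle through the state model (the proposal)
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(w) or 1.0
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling
        parts = rng.choices(parts, weights=w, k=n)
    return means
```

The APF differs by pre-selecting particles likely to explain the next observation before propagation, reducing the variance of the weights.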

Yu, Miao; Liu, Cunjia; Chen, Wen-hua; Chambers, Jonathon

2014-06-01

378

NASA Astrophysics Data System (ADS)

Pt-Pd alloy nanoparticles, as potential catalyst candidates for new-energy applications such as fuel cells and lithium ion batteries owing to their excellent reactivity and selectivity, have attracted growing attention in recent years. Since structure determines the physical and chemical properties of nanoparticles, the development of a reliable method for searching the stable structures of Pt-Pd alloy nanoparticles has become increasingly important for exploring the origin of their properties. In this article, we have employed the particle swarm optimization algorithm to investigate the stable structures of alloy nanoparticles with fixed shape and atomic proportion. An improved discrete particle swarm optimization algorithm has been proposed and the corresponding scheme has been presented. Subsequently, the swap operator and swap sequence have been applied to reduce the probability of premature convergence to local optima. Furthermore, the parameters of the exchange probability and the 'particle' size have also been considered in this article. Finally, tetrahexahedral Pt-Pd alloy nanoparticles have been used to test the effectiveness of the proposed method. The calculated results verify that the improved particle swarm optimization algorithm has superior convergence and stability compared with the traditional one.

Shao, Gui-Fang; Wang, Ting-Na; Liu, Tun-Dong; Chen, Jun-Ren; Zheng, Ji-Wen; Wen, Yu-Hua

2015-01-01

379

Generalized Particle Swarm Algorithm for HCR Gearing Geometry Optimization

NASA Astrophysics Data System (ADS)

Kuzmanović, Siniša; Vereš, Miroslav; Rackov, Milan

2012-12-01

380

Panorama parking assistant system with improved particle swarm optimization method

NASA Astrophysics Data System (ADS)

A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.

Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

2013-10-01

381

This paper presents an overview of the development of a graphical software environment called Papillon DSP OptiStation for the design and constrained min-max optimization of multi-rate FIR and IIR digital filters. The optimization engine is required to handle simultaneously multiple objective functions and multiple arbitrary equality and inequality constraints. Moreover, it is required to handle not only infinite-precision optimization, but

Behrouz Nowrouzian; A. T. G. Fuller; F. Ashrafzadeh

1998-01-01

382

Optimal steering of inertial particles diffusing anisotropically with losses

Exploiting a fluid dynamic formulation for which a probabilistic counterpart might not be available, we extend the theory of Schroedinger bridges to the case of inertial particles with losses and general, possibly singular diffusion coefficient. We find that, as for the case of constant diffusion coefficient matrix, the optimal control law is obtained by solving a system of two p.d.e.'s involving adjoint operators and coupled through their boundary values. In the linear case with quadratic loss function, the system turns into two matrix Riccati equations with coupled split boundary conditions. An alternative formulation of the control problem as a semidefinite programming problem allows computation of suboptimal solutions. This is illustrated in one example of inertial particles subject to a constant rate killing.

Yongxin Chen; Tryphon T. Georgiou; Michele Pavon

2014-10-07

383

Generating optimal initial conditions for smooth particle hydrodynamics (SPH) simulations

We present a new optimal method to set up initial conditions for smooth particle hydrodynamics simulations, which may also be of interest for N-body simulations. The new method is based on weighted Voronoi tessellations (WVTs) and can meet arbitrarily complex spatial resolution requirements. We conduct a comprehensive review of existing SPH setup methods and outline their advantages, limitations and drawbacks. A serial version of our WVT setup method is publicly available, and we give detailed instructions on how to easily implement the new method on top of an existing parallel SPH code.

Diehl, Steven [Los Alamos National Laboratory; Rockefeller, Gabriel M [Los Alamos National Laboratory; Fryer, Christopher L [Los Alamos National Laboratory

2008-01-01

384

Optimization of nanoparticle core size for magnetic particle imaging

Magnetic Particle Imaging (MPI) is a powerful new diagnostic visualization platform designed for measuring the amount and location of superparamagnetic nanoscale molecular probes (NMPs) in biological tissues. Promising initial results indicate that MPI can be extremely sensitive and fast, with good spatial resolution for imaging human patients or live animals. Here, we present modeling results that show how MPI sensitivity and spatial resolution both depend on NMP-core physical properties, and how MPI performance can be effectively optimized through rational core design. Monodisperse magnetite cores are attractive since they are readily produced with a biocompatible coating and controllable size that facilitates quantitative imaging.

Ferguson, Matthew R.; Minard, Kevin R.; Krishnan, Kannan M.

2009-05-01

385

PMSM Driver Based on Hybrid Particle Swarm Optimization and CMAC

NASA Astrophysics Data System (ADS)

A novel hybrid particle swarm optimization (PSO) and cerebellar model articulation controller (CMAC) is introduced to the permanent magnet synchronous motor (PMSM) driver. PSO can simulate the random learning among the individuals of population and CMAC can simulate the self-learning of an individual. To validate the ability and superiority of the novel algorithm, experiments and comparisons have been done in MATLAB/SIMULINK. Analysis among PSO, hybrid PSO-CMAC and CMAC feed-forward control is also given. The results prove that the electric torque ripple and torque disturbance of the PMSM driver can be reduced by using the hybrid PSO-CMAC algorithm.

Tu, Ji; Cao, Shaozhong

386

NASA Astrophysics Data System (ADS)

The climatological sensitivities of crop yields to changes in mean temperature and precipitation during the growing season were statistically examined. The sensitivity is defined as the change of yield in response to the change of climatic condition in the growth period from sowing to harvesting. The crops studied are maize and soybean, which are cultivated in the United States, Brazil and China, the world's major production countries. We collected yield data for maize and soybean at the county level of the United States from USDA for 1980-2006, at the Município level of Brazil for 1990-2006, and at the county level of China for 1980-2005. While data on only four provinces in China are used (Heilongjiang, Henan, Liaoning, and Shandong), the total production of the four provinces reaches about 40% (maize) and 51% (soybean) of the country total (USDA 1997). We used JRA-25 reanalysis climate data distributed by the Japanese Meteorological Agency for 1980 through 2006 with a resolution of 1.125° in latitude and longitude. To match resolutions, the crop yield data were reallocated onto the same grid as the climate data. To eliminate economic and technical effects on yield, we detrended the time series of yield and climate using a local regression model (cubic weighting and the M-estimator of Tukey's bi-weight function). The time series of deviations from the trend were examined against the changes in temperature and precipitation for each grid cell using the particle filter. The particle filter used here is based on a self-organizing state-space model. As a result, in the northern hemisphere, positive sensitivity, i.e. an increase in temperature shifts the crop yield positively, is generally found at higher latitudes, while negative sensitivity is found at lower latitudes. Neutral sensitivity is found in regions where the mean growing-season temperature for maize and soybean is around 19.4 °C and 20.4 °C, respectively. The sensitivity to precipitation is generally positive for all regions and crops. In Brazil, the sensitivity to temperature is not clear because the variation in temperature is very small. In the southern hemisphere, the sensitivity to precipitation is larger than that to temperature. Moreover, it is shown that the sensitivities of crops to temperature and precipitation have changed significantly with time. The particle filter enables us to capture the historical changes in crop yields in response to environmental changes at country scale and to predict the near future.

Yokozawa, M.; Sakurai, G.; Iizumi, T.

2010-12-01

387

We consider design optimization of passively mode-locked two-section semiconductor lasers that incorporate intracavity grating spectral filters. Our goal is to develop a method for finding the optimal wavelength location of the filter in order to maximize the region of stable mode-locking as a function of drive current and reverse bias in the absorber section. In order to account for material dispersion in the two sections of the laser, we use analytic approximations for the gain and absorption as functions of carrier density and frequency. Fits to measured gain and absorption curves then provide inputs for numerical simulations based on a large-signal-accurate delay-differential model of the mode-locked laser. We show how a unique set of model parameters for each value of the drive current and reverse bias voltage can be selected based on the variation of the net gain along branches of steady-state solutions of the model. We demonstrate the validity of this approach by showing qualitative agreement b...

O'Callaghan, Finbarr; O'Brien, Stephen

2014-01-01

388

Optimal hydrograph separation filter to evaluate transport routines of hydrological models

NASA Astrophysics Data System (ADS)

Hydrograph separation (HS) using recursive digital filter approaches focuses on distinguishing between rapidly occurring discharge components, like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures which perform the HS using a set of separation parameters. The first goal of this study is to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to benchmark the performance of process-based hydro-geochemical (HG) models. The new HG routine can be used to quantify the degree of knowledge that the stream flow time series itself contributes to the HG analysis, using the newly developed benchmark geochemistry efficiency (BGE). Results of the OHS show that the two HS fractions (rapid and slow) differ according to the HG substances selected. The BFImax parameter (the long-term ratio of baseflow to total streamflow) ranged from 0.26 to 0.94 for SO4-2 and total suspended solids (TSS), respectively. Then, predictions of SO4-2 transport from a process-based hydrological model were benchmarked with the proposed HG routine, in order to evaluate the significance of the HG routines in the process-based model. This comparison provides a valuable quality test that would not be obvious when using traditional measures like r2 or the NSE (Nash-Sutcliffe efficiency). The process-based model resulted in r2 = 0.65 and NSE = 0.65, while the benchmark routine results were slightly lower, with r2 = 0.61 and NSE = 0.58. However, the comparison between the two models resulted in an obvious advantage for the process-based model, with BGE = 0.15.
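
The recursive digital filter family referred to above can be illustrated with the widely used two-parameter Eckhardt form, in which BFImax appears exactly as the long-term baseflow ratio described in the abstract. This is a generic sketch; the study's own filter and optimized parameter values may differ.

```python
def eckhardt_baseflow(q, bfi_max=0.8, a=0.98):
    """Two-parameter recursive digital baseflow filter (Eckhardt form):
        b_k = [(1 - BFImax) * a * b_{k-1} + (1 - a) * BFImax * q_k]
              / (1 - a * BFImax)
    q: total streamflow series; a: recession constant; bfi_max: long-term
    ratio of baseflow to total streamflow. Baseflow is constrained to
    never exceed total streamflow. Returns (baseflow, quickflow)."""
    b = [q[0] * bfi_max]                 # initialise with the BFImax share
    for k in range(1, len(q)):
        bk = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[k]) \
             / (1 - a * bfi_max)
        b.append(min(bk, q[k]))          # baseflow cannot exceed streamflow
    quick = [qi - bi for qi, bi in zip(q, b)]
    return b, quick
```

Optimizing (a, BFImax) against measured hydrochemistry, rather than fixing them subjectively, is the essence of the OHS idea described above.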

Rimmer, Alon; Hartmann, Andreas

2014-06-01

389

Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by its strong dependence on the practitioner, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results, with errors within 10% for all the metrics across all levels of noise tested. The newly proposed method therefore makes cWIA fully automatic and independent of the practitioner, opening the possibility of multi-centre trials. PMID:25571129
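
Pointwise selection of the polynomial degree can be sketched as a local least-squares fit whose degree is chosen per sample by a penalized-residual criterion. The AIC-style criterion, window length and candidate degrees below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def adaptive_savgol(y, window=11, degrees=(1, 2, 3, 4)):
    """Pointwise-adaptive Savitzky-Golay sketch: at each sample, fit
    least-squares polynomials of several degrees over a centred window
    and keep the fit whose AIC-style score (residual term plus a
    complexity penalty) is smallest; output the fitted value at the
    window centre."""
    y = np.asarray(y, dtype=float)
    half = window // 2
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        seg = y[lo:hi]
        x = np.arange(lo - i, hi - i, dtype=float)  # centre at 0
        best, best_score = seg[i - lo], np.inf
        for d in degrees:
            if len(seg) <= d + 1:       # too few points for this degree
                continue
            coef = np.polyfit(x, seg, d)
            resid = seg - np.polyval(coef, x)
            rss = float(resid @ resid) + 1e-12
            score = len(seg) * np.log(rss / len(seg)) + 2 * (d + 1)
            if score < best_score:
                best_score = score
                best = float(np.polyval(coef, 0.0))  # value at the centre
        out[i] = best
    return out
```

A low degree is chosen where the trace is smooth (strong noise suppression) and a higher degree where sharp features occur, which is the timescale adaptivity the abstract calls for.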

Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

2014-08-01

390

Detecting disease outbreaks using a combined Bayesian network and particle filter approach.

Evaluating whether a disease outbreak has occurred based on limited information in medical records is inherently a probabilistic problem. This paper presents a methodology for consistently analysing the probability that a disease targeted by a surveillance system has appeared in the population, based on the medical records of the individuals within the target population, using a Bayesian network. To enable the system to produce a probability density function of the fraction of the population that is infected, a mathematically consistent conjoining of Bayesian networks and particle filters is used. This approach is tested against the default algorithm of ESSENCE Desktop Edition (which adaptively uses Poisson, exponentially weighted moving average and linear regression techniques as needed), and is shown, for the simulated test data used, to give significantly shorter detection times at false alarm rates of practical interest. This methodology shows promise to greatly improve detection times for outbreaks in populations where timely electronic health records are available for data-mining. PMID:25637764

Dawson, Peter; Gailis, Ralph; Meehan, Alaster

2015-04-01

391

Evaluation of a particle swarm algorithm for biomechanical optimization.

Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms--a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353

Schutte, Jaco F; Koh, Byung-Il; Reinbolt, Jeffrey A; Haftka, Raphael T; George, Alan D; Fregly, Benjamin J

2005-06-01

392

NASA Astrophysics Data System (ADS)

Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.

Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

2012-04-01

393

Optimization of pre- and post-filters in the presence of near and far-end crosstalk

Full-duplex data communications are considered over a linear, time-invariant, multi-input/multi-output channel. For both the continuous- and discrete-time cases, optimal multi-input/multi-output transmitter and receiver filters are derived using the minimum mean-square error (MSE) criterion, with a power constraint on the transmitted signal, in the presence of both near- and far-end crosstalk. The discrete-time problem is solved for two different filter models:

Pedro Crespo; Michael L. Honig; Kenneth Steiglitz

1989-01-01

394

Design and Optimization of Dual Band Microstrip Antenna Using Particle Swarm Optimization Technique

Dual-frequency operation of antennas has become a necessity for many applications in recent wireless communication systems, such as GPS and GSM services, each operating in two different frequency bands. A new technique to achieve dual band operation from different types of microstrip antennas is presented here. An evolutionary design process using a particle swarm optimization (PSO) algorithm in conjunction with the

Santanu Kumar Behera; Y. Choukiker

2010-01-01

395

Particle Swarm Optimization in Comparison with Classical Optimization for GPS Network Design

NASA Astrophysics Data System (ADS)

The Global Positioning System (GPS) is increasingly coming into use to establish geodetic networks. In order to meet the established aims of a geodetic network, it has to be optimized, depending on design criteria. Optimization of a GPS network can be carried out by selecting baseline vectors from all of the probable baseline vectors that can be measured in a GPS network. Classically, a GPS network can be optimized using the trial and error method or analytical methods such as linear or nonlinear programming, or in some cases by generalized or iterative generalized inverses. Optimization problems may also be solved by intelligent optimization techniques such as Genetic Algorithms (GAs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The purpose of the present paper is to show how the PSO can be used to design a GPS network. Then, the efficiency and the applicability of this method are demonstrated with an example of a GPS network that was solved previously using a classical method. Our example shows that the PSO is effective, improving efficiency by 19.2% over the classical method.

Doma, M. I.

2013-12-01

396

Application of Particle Swarm to Multiobjective Optimization (Jacqueline Moore, Richard Chapman)

Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995) is a new type of technique for solving optimization problems (Goldberg 1989). PSO (Kennedy 1997) is based on the hypothesis that members of a population (swarm) can profit from

Coello, Carlos A. Coello

397

Applying Particle Swarm Optimization to Quality-of-Service-driven Web Service Composition

The method in particular performs very well in service-oriented environments. Keywords: particle swarm optimization, composition, delivery, service-to-service collaboration, monitoring, optimization, management

Ludwig, Simone

398

This experiment was designed to study the release of cellulose acetate fibers, charcoal, and other particles from cigarettes with charcoal and activated charcoal/resin filters. For the first time in such studies, efforts were made to identify the particles that were eluted using other analytical techniques in addition to light microscopy. Other corrective measures were also implemented. During the studies it was found that trimming of larger filters to fit smaller filter housings introduced cellulose acetate-like particles from the fibers of the filter material. Special, custom made-to-fit filters were used instead. Tools such as forceps that were used to retrieve filters from their housings were also found to introduce fragments onto the filters. It is believed that introduction of such debris may have accounted for the very large number of cellulose acetate and charcoal particles that had been reported in the literature. Use of computerized particle-counting microscopes appeared to result in excessive number of particles. This could be because the filter or smoke pads used for such work do not have the flat and level surfaces ideal for computerized particle-counting microscopes. At the high magnifications that the pads were viewed for particles, constant focusing of the microscope would be essential. It was also found that determination of total particles by using extrapolation of particle count by grid population usually gave extremely high particle counts compared to the actual number of particles present. This could be because particle distributions during smoking are not uniform. Lastly, a less complex estimation of the thickness of the particles was adopted. This and the use of a simple mathematical conversion coupled with the Cox equation were utilized to assess the aerodynamic diameters of the particles. 
Our findings showed that compared to numbers quoted in the literature, only a small amount of charcoal, cellulose acetate shards, and other particles are released. It was also shown that those particles would have a low likelihood of reaching the lung. PMID:16036754

Agyei-Aye, K; Appleton, S; Rogers, R A; Taylor, C R

2004-08-01

399

Particle Swarm Optimization with Scale-Free Interactions

The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors, as in bird flocking, to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ the scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to the traditional PSO with fully-connected topology or regular topology, the scale-free topology used in SF-PSO incorporates the diversity of individuals in searching and information dissemination ability, leading to a quite different optimization process. Systematic results with respect to several standard test functions demonstrate that SF-PSO gives rise to a better balance between the convergence speed and the optimum quality, accounting for its much better performance than that of the traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications in computational intelligence and complex networks. PMID:24859007
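The topological change at the heart of such a scheme can be sketched by replacing the global best with the best personal best among each particle's neighbours on a Barabási-Albert (scale-free) graph. This is an illustrative reconstruction under assumed parameters, not the authors' exact algorithm:

```python
import random

def ba_network(n, m=2, seed=0):
    """Barabasi-Albert preferential attachment graph as adjacency sets:
    each new node links to m existing nodes chosen roughly by degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)          # nodes repeated by their degree
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def sf_pso(f, dim, n=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO in which each particle follows the best personal best among its
    scale-free network neighbours instead of a single global best."""
    rng = random.Random(seed)
    adj = ba_network(n, seed=seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    pbest = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            g = min(adj[i] | {i}, key=lambda j: pbest[j])  # neighbourhood best
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (P[g][d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
    return min(pbest)

# Minimize the 3-D sphere function with the scale-free neighbourhood rule
best = sf_pso(lambda x: sum(v * v for v in x), dim=3)
```

Hub particles see many neighbours and spread good solutions quickly, while low-degree particles search more independently, which is the diversity effect the abstract describes.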

Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu

2014-01-01

400

A computational, three-dimensional approach to investigate the behavior of diesel soot particles in the micro-channels of a wall-flow, porous-ceramic particulate filter is presented. Particle size examined is in the PM2.5 range. The flow field is simulated with a finite-volume Navier-Stokes solver and the Ergun equation is used to model the porous material. The permeability coefficients were obtained by fitting experimental data.

Fabio Sbrizzai; Paolo Faraldi; Alfredo Soldati

401

Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
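The linear Kalman filter update that such estimators build on, reduced to a scalar state for illustration (the engine model, health parameters, and noise variances here are placeholders, not the NASA implementation):

```python
import random

def kalman_1d(zs, x0=0.0, p0=1.0, q=1e-5, r=0.25, a=1.0, h=1.0):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w_k,  z_k = h*x_k + v_k,
    with process variance q and measurement variance r."""
    x, p, estimates = x0, p0, []
    for z in zs:
        # predict step
        x = a * x
        p = a * p * a + q
        # update step with the new measurement
        k = p * h / (h * p * h + r)      # Kalman gain
        x = x + k * (z - h * x)
        p = (1 - k * h) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant unknown parameter (true value 1.0)
rng = random.Random(0)
zs = [1.0 + rng.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
```

In the under-determined engine problem the state vector is larger than the measurement vector, and tuner selection amounts to choosing which linear combinations of health parameters this recursion is allowed to estimate.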

Simon, Donald L.; Garg, Sanjay

2011-01-01

402

NASA Astrophysics Data System (ADS)

The presented filter-based optical method for determination of soot (light absorbing carbon or Black Carbon, BC) can be implemented in the field under primitive conditions and at low cost. This enables researchers with limited economic means to perform monitoring at remote locations, especially in Asia, where it is much needed. One concern when applying filter-based optical measurements of BC is that they suffer from systematic errors due to the light scattering of non-absorbing particles co-deposited on the filter, such as inorganic salts and mineral dust. In addition to an optical correction of the non-absorbing material this study provides a protocol for correction of light scattering based on the chemical quantification of the material, which is a novelty. A newly designed photometer was implemented to measure light transmission on particle accumulating filters, which includes an additional sensor recording backscattered light. The choice of polycarbonate membrane filters avoided high chemical blank values and reduced errors associated with length of the light path through the filter. Two protocols for corrections were applied to aerosol samples collected at the Maldives Climate Observatory Hanimaadhoo during episodes with either continentally influenced air from the Indian/Arabian subcontinents (winter season) or pristine air from the Southern Indian Ocean (summer monsoon). The two ways of correction (optical and chemical) lowered the particle light absorption of BC by 63 and 61 %, respectively, for data from the Arabian Sea sourced group, resulting in median BC absorption coefficients of 4.2 and 3.5 Mm-1. Corresponding values for the South Indian Ocean data were 69 and 97 % (0.38 and 0.02 Mm-1). A comparison with other studies in the area indicated an overestimation of their BC levels, by up to two orders of magnitude. 
This raises the necessity for chemical correction protocols on optical filter-based determinations of BC, before even the sign on the radiative forcing based on their effects can be assessed.

Engström, J. E.; Leck, C.

2011-08-01

403

The main purpose of this research is to determine the influence of small dispersive coal dust particles of different fractional consistence on the technical characteristics of the vertical iodine air filter at a nuclear power plant. The research on the transport properties of the small dispersive coal dust particles in the granular filtering medium of the absorber in the vertical iodine air filter was completed for the case in which the modeled aerodynamic conditions are similar to the real aerodynamic conditions. It is shown that the presence of different fractional consistences of small dispersive coal dust particles, with dimensions decreasing down to micro and nano sizes, under the action of the air-dust aerosol stream normally results in a significant change in the distribution of the small dispersive coal dust particle masses in the granular filtering medium of the absorber in the vertical iodine air filter, changing the vertical iodine air filter's aerodynamic characteristics. The precise characterization of...

Neklyudov, I M; Fedorova, L I; Poltinin, P Ya

2013-01-01

404

An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel

NASA Astrophysics Data System (ADS)

This study employed the Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width values for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. The four machining parameters optimized using the APSO algorithm are Pulse on-time, Pulse off-time, Gap voltage, and Wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationships between control factors and output responses are established using linear regression models in Minitab software. Finally, APSO algorithm, a nature inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as constraint. The confirmation experiments carried out with the optimum conditions show that the proposed algorithm has potential for finding numerous optimal input machining parameters that can fulfill the wide-ranging requirements of a process engineer working in the WEDM industry.

Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.

2015-01-01

405

Particle Swarm Optimization Approach in a Consignment Inventory System

NASA Astrophysics Data System (ADS)

Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer, but is still owned by the supplier. This creates a condition of shared risk whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. Here a Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values. Finally a sensitivity analysis is performed to examine the effects of each parameter on the decision variables. PSO performance is also compared with a genetic algorithm.

Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

2009-09-01

406

Order-2 Stability Analysis of Particle Swarm Optimization.

Several stability analyses and stable regions of particle swarm optimization (PSO) have been proposed before. The assumption of stagnation and different definitions of stability are adopted in these analyses. In this paper, the order-2 stability of PSO is analyzed based on a weak stagnation assumption. A new definition of stability is proposed and an order-2 stable region is obtained. Several existing stability analyses for canonical PSO are compared, especially their definitions of stability and the corresponding stable regions. It is shown that the classical stagnation assumption is too strict and not necessary. Moreover, among all these definitions of stability, it is shown that our definition requires the weakest conditions, and additional conditions bring no benefit. Finally, numerical experiments are reported to show that the obtained stable region is meaningful. A new parameter combination of PSO is also shown to be good, even better than some known best parameter combinations. PMID:24738856
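For context, the canonical PSO update and the commonly cited stable regions that such analyses refine can be written as follows. These are standard results from the prior literature, not the paper's new order-2 region:

```latex
% Canonical PSO update (per dimension), with inertia w, acceleration
% coefficients c_1, c_2, and r_1, r_2 \sim U(0,1):
v_{t+1} = w\, v_t + c_1 r_1 (p - x_t) + c_2 r_2 (g - x_t), \qquad
x_{t+1} = x_t + v_{t+1}
% Order-1 (mean) stability of the deterministic recursion:
|w| < 1, \qquad 0 < c_1 + c_2 < 2(1 + w)
% A commonly cited order-2 (variance) stable region (Poli, 2009):
c_1 + c_2 < \frac{24\,(1 - w^2)}{7 - 5w}
```

Order-2 analyses bound the variance of the particle positions rather than just their mean, which is why the stable region is narrower than the order-1 one.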

Liu, Qunfeng

2014-04-16

407

NASA Astrophysics Data System (ADS)

Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method yields significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).

Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

2012-05-01

408

We grew multi-walled carbon nanotubes (MWCNTs) on a glass fiber air filter using thermal chemical vapor deposition (CVD) after the filter was catalytically activated with a spark discharge. After the CNT deposition, filtration and antibacterial tests were performed with the filters. Potassium chloride (KCl) particles (<1 µm) were used as the test aerosol particles, and their number concentration was measured using a scanning mobility particle sizer. Antibacterial tests were performed using the colony counting method, and Escherichia coli (E. coli) was used as the test bacteria. The results showed that the CNT deposition increased the filtration efficiency of nano and submicron-sized particles, but did not increase the pressure drop across the filter. When a pristine glass fiber filter that had no CNTs was used, the particle filtration efficiencies at particle sizes under 30 nm and near 500 nm were 48.5% and 46.8%, respectively. However, the efficiencies increased to 64.3% and 60.2%, respectively, when the CNT-deposited filter was used. The reduction in the number of viable cells was determined by counting the colony forming units (CFU) of each test filter after contact with the cells. The pristine glass fiber filter was used as a control, and 83.7% of the E. coli were inactivated on the CNT-deposited filter. PMID:21767869

Park, Jae Hong; Yoon, Ki Young; Na, Hyungjoo; Kim, Yang Seon; Hwang, Jungho; Kim, Jongbaeg; Yoon, Young Hun

2011-09-01

409

Neurofilaments are long flexible cytoplasmic protein polymers that are transported rapidly but intermittently along the axonal processes of nerve cells. Current methods for studying this movement involve manual tracking of fluorescently tagged neurofilament polymers in videos acquired by time-lapse fluorescence microscopy. Here, we describe an automated tracking method that uses particle filtering to implement a recursive Bayesian estimation of the filament location in successive frames of video sequences. To increase the efficiency of this approach, we take advantage of the fact that neurofilament movement is confined within the boundaries of the axon. We use piecewise cubic spline interpolation to model the path of the axon and then we use this model to limit both the orientation and location of the neurofilament in the particle tracking algorithm. Based on these two spatial constraints, we develop a prior dynamic state model that generates significantly fewer particles than generic particle filtering, and we select an adequate observation model to produce a robust tracking method. We demonstrate the efficacy and efficiency of our method by performing tracking experiments on real time-lapse image sequences of neurofilament movement, and we show that the method performs well compared to manual tracking by an experienced user. This spatially constrained particle filtering approach should also be applicable to the movement of other axonally transported cargoes. PMID:21859599

Yuan, Liang; Zhu, Junda; Wang, Lina; Brown, A.

2012-01-01

410

Crystal structure prediction via particle-swarm optimization

NASA Astrophysics Data System (ADS)

We have developed a method for crystal structure prediction from scratch through the particle-swarm optimization (PSO) algorithm within an evolutionary scheme. The PSO technique differs from the genetic algorithm in that it avoids the use of evolution operators (e.g., crossover and mutation). The approach is based on an efficient global minimization of free-energy surfaces merging total-energy calculations via the PSO technique and requires only chemical compositions for a given compound to predict stable or metastable structures at given external conditions (e.g., pressure). A particularly devised geometrical structure parameter which allows the elimination of similar structures during structure evolution was implemented to enhance the structure search efficiency. The application of a designed variable unit-cell size technique has greatly reduced the computational cost. Moreover, the symmetry constraint imposed in the structure generation enables the realization of diverse structures, leads to significantly reduced search space and optimization variables, and thus accelerates global structure convergence. The PSO algorithm has been successfully applied to the prediction of many known systems (e.g., elemental, binary, and ternary compounds) with various chemical-bonding environments (e.g., metallic, ionic, and covalent bonding). The high success rate demonstrates the reliability of this methodology and illustrates the promise of PSO as a major technique for crystal structure determination.

Wang, Yanchao; Lv, Jian; Zhu, Li; Ma, Yanming

2010-09-01

411

Binary Particle Swarm Optimization based Biclustering of Web Usage Data

NASA Astrophysics Data System (ADS)

Web mining is the nontrivial process of discovering valid, novel, potentially useful knowledge from web data using data mining techniques or methods. It may give information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data it automatically captures the hidden browsing patterns from it in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main objective of this algorithm is to retrieve the global optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages which are useful for E-Commerce applications like web advertising and marketing. Experiments are conducted on a real dataset to prove the efficiency of the proposed algorithm.
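The binary PSO underlying such biclustering can be sketched with the classic sigmoid rule of Kennedy and Eberhart (1997), in which bits are turned on with a probability given by the squashed velocity. The OneMax objective used below is a toy stand-in for the paper's bicluster fitness:

```python
import math
import random

def bpso(f, n_bits, n_particles=30, iters=100, c1=2.0, c2=2.0, vmax=4.0, seed=0):
    """Binary PSO: the velocity is squashed by a sigmoid to give the
    probability that each bit is set. Maximizes f over bit vectors."""
    rng = random.Random(seed)
    X = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]
    pbest = [f(x) for x in X]
    g = max(range(n_particles), key=lambda i: pbest[i])
    gbest, G = pbest[g], P[g][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] += c1 * r1 * (P[i][d] - X[i][d]) + c2 * r2 * (G[d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))   # velocity clamping
                # sigmoid of velocity = probability that the bit is 1
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            fx = f(X[i])
            if fx > pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx > gbest:
                    gbest, G = fx, X[i][:]
    return gbest, G

# OneMax toy objective: maximize the number of 1-bits in a 20-bit string
best, bits = bpso(sum, n_bits=20)
```

In the biclustering setting each bit would encode whether a given user (row) or page (column) belongs to the bicluster, with fitness measuring bicluster coherence.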

Rathipriya, R.; Thangavel, K.; Bagyamani, J.

2011-07-01

412

A theoretical model has been used to calculate the filtration efficiency that would be indicated by the photometer for challenge aerosols of different size distributions and HEPA filters with different efficiencies as functions of particle size. The model compares the calculated overall efficiency indicated by the photometer with efficiencies calculated with respect to particle number and mass. This calculation assumes three aerosol distributions previously measured at the Filter Test Facilities and four different filtration efficiency versus size curves. The differences in efficiency measured by the QA test procedure and the efficiencies with respect to aerosol mass and number have been calculated for a range of different size particles. The results of these calculations are discussed.

Tillery, M.I.; Salzman, G.C.; Ettinger, H.J.

1982-01-01

413

Continuous-Time Filter Design Optimized for Reduced Die Area. Charles Myers, Student Member, IEEE, et al. IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 51, no. 3, March 2004, p. 105. A method is presented for distributing capacitor and resistor area to optimally reduce die area in a given continuous-time filter design.

Moon, Un-Ku

414

NASA Astrophysics Data System (ADS)

Correlation filters with three transmittance levels (+1, 0, and -1) are of interest in optical pattern recognition because they can be implemented on available spatial light modulators and because the zero level allows us to include a region of support (ROS). The ROS can provide additional control over the filter's noise tolerance and peak sharpness. A new algorithm based on optimizing a compromise average performance measure (CAPM) is proposed for designing three-level composite filters. The performance of this algorithm is compared to other three-level composite filter designs using a common image database and using figures of merit such as the Fisher ratio, error rate, and light efficiency. It is shown that the CAPM algorithm yields better results.

Hendrix, Charles D.; Vijaya Kumar, B. V. K.

1994-06-01

415

It is well known that canonical signed digit (CSD) multiplier coefficients reduce the complexity and power consumption requirements in the hardware implementation of FIR digital filters. Optimization of the constituent CSD multiplier coefficients using genetic algorithms can further reduce this complexity by constantly evolving from generation to generation based on the minimization of an objective fitness function modeled on the

Sai Mohan Kilambi; Behrouz Nowrouzian

2006-01-01

416

Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, which have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to escape them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of PSO and DE and maintains diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail. PMID:24688370
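The DE half of such a hybrid can be sketched with the classic DE/rand/1/bin generation step. The adaptive PSO-DE balancing parameter of HPSO-DE is not modeled here, and the sphere objective and control parameters are illustrative:

```python
import random

def de_step(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutate from three distinct random
    members, binomially cross over with the target, keep the better one."""
    rng = rng or random.Random()
    n, dim = len(pop), len(pop[0])
    for i in range(n):
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        jrand = rng.randrange(dim)        # guarantees at least one mutated gene
        trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                 if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
        ft = f_obj(trial)
        if ft < fitness[i]:               # greedy one-to-one selection
            pop[i], fitness[i] = trial, ft
    return pop, fitness

# Minimize the 3-D sphere function with pure DE steps
rng = random.Random(0)
f_obj = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
fit = [f_obj(x) for x in pop]
for _ in range(200):
    pop, fit = de_step(pop, fit, f_obj, rng=rng)
```

A hybrid along the lines of the abstract would interleave such DE generations with PSO velocity updates, switching between them according to the balancing parameter.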

Yu, Xiaobing; Cao, Jie; Shan, Haiyan; Zhu, Li; Guo, Jun

2014-01-01

417

NASA Astrophysics Data System (ADS)

This paper describes a methodology for determination of optimum failure rate and repair time for each section of a radial distribution system. An objective function in terms of reliability indices and their target values is selected. These indices depend mainly on failure rate and repair time of a section present in a distribution network. A cost is associated with the modification of failure rate and repair time. Hence the objective function is optimized subject to failure rate and repair time of each section of the distribution network considering the total budget allocated to achieve the task. The problem has been solved using differential evolution and bare bones particle swarm optimization. The algorithm has been implemented on a sample radial distribution system.

Kela, K. B.; Arya, L. D.

2014-09-01

418

A Bayesian Interpretation of the Particle Swarm Optimization and Its Kernel Extension

Particle swarm optimization is a popular method for solving difficult optimization problems. There have been attempts to formulate the method in formal probabilistic or stochastic terms (e.g. bare bones particle swarm) with the aim to achieve more generality and explain the practical behavior of the method. Here we present a Bayesian interpretation of the particle swarm optimization. This interpretation provides a formal framework for incorporation of prior knowledge about the problem that is being solved. Furthermore, it also allows the particle swarm optimization method to be extended through the use of kernel functions that represent an intermediary transformation of the data into a different space where the optimization problem is expected to be easier to resolve; such a transformation can be seen as a form of prior knowledge about the nature of the optimization problem. We derive from the general Bayesian formulation the commonly used particle swarm methods as particular cases. PMID:23144937

Andras, Peter

2012-01-01

419

Diesel particle filter and fuel effects on heavy-duty diesel engine emissions.

The impacts of biodiesel and a continuously regenerated (catalyzed) diesel particle filter (DPF) on the emissions of volatile unburned hydrocarbons, carbonyls, and particle associated polycyclic aromatic hydrocarbons (PAH) and nitro-PAH, were investigated. Experiments were conducted on a 5.9 L Cummins ISB, heavy-duty diesel engine using certification ultra-low-sulfur diesel (ULSD, S ≤ 15 ppm), soy biodiesel (B100), and a 20% blend thereof (B20). Against the ULSD baseline, B20 and B100 reduced engine-out emissions of measured unburned volatile hydrocarbons and PM associated PAH and nitro-PAH by significant percentages (40% or more for B20 and a higher percentage for B100). However, emissions of benzene were unaffected by the presence of biodiesel and emissions of naphthalene actually increased for B100. This suggests that the unsaturated FAME in soy-biodiesel can react to form aromatic rings in the diesel combustion environment. Methyl acrylate and methyl 3-butanoate were observed as significant species in the exhaust for B20 and B100 and may serve as markers of the presence of biodiesel in the fuel. The DPF was highly effective at converting gaseous hydrocarbons and PM associated PAH and total nitro-PAH. However, conversion of 1-nitropyrene by the DPF was less than 50% for all fuels. Blending of biodiesel caused a slight reduction in engine-out emissions of acrolein, but otherwise had little effect on carbonyl emissions. The DPF was highly effective for conversion of carbonyls, with the exception of formaldehyde. Formaldehyde emissions were increased by the DPF for ULSD and B20. PMID:20886845

Ratcliff, Matthew A; Dane, A John; Williams, Aaron; Ireland, John; Luecke, Jon; McCormick, Robert L; Voorhees, Kent J

2010-11-01

420

IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING

This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches. PMID:21132112
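The particle-filter ingredient of such a scheme can be illustrated in isolation; the sketch below is a generic bootstrap filter for a 1-D random-walk state, not the paper's pendulum example, and the noise levels, ensemble size and observation sequence are hypothetical:

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter for a 1-D random-walk state observed
    in Gaussian noise (q, r are assumed process/observation std devs)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the random-walk dynamics
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # posterior-mean estimate of the state
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to counter weight degeneracy
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

est = bootstrap_pf([0.0, 0.1, 0.2, 0.3, 0.4])
```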

Bayard, David S.; Schumitzky, Alan

2009-01-01

421

Usefulness of Nonlinear Interpolation and Particle Filter in Zigbee Indoor Positioning

NASA Astrophysics Data System (ADS)

The key to fingerprint positioning algorithms is establishing an effective fingerprint information database based on the received signal strength indicator (RSSI) of different reference nodes. The traditional method is to set multiple calibration sampling points in the localization area and collect a large amount of sample data, which is very time consuming. Using a Zigbee sensor network as the platform, and considering the influence of positioning signal interference, we propose an improved algorithm that generates a virtual database based on polynomial interpolation, while the pre-estimated result is refined by a particle filter. Experimental results show that this method can quickly generate a simple, fine-grained localization information database and improve the positioning accuracy at the same time.
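A minimal 1-D sketch of the virtual-database idea: fit a polynomial through sparse calibration samples and evaluate it on a finer grid. The distances and RSSI values below are invented for illustration, and the paper's actual grid layout and polynomial order are not specified here:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def densify(xs, ys, step=0.5):
    """Generate 'virtual' fingerprint points on a finer grid by
    polynomial interpolation of sparse calibration measurements."""
    n = int((xs[-1] - xs[0]) / step)
    grid = [xs[0] + k * step for k in range(n + 1)]
    return grid, [lagrange(xs, ys, x) for x in grid]

# sparse survey: RSSI (dBm) measured every 2 m from a hypothetical reference node
xs = [0.0, 2.0, 4.0, 6.0, 8.0]
rssi = [-40.0, -52.0, -58.0, -62.0, -65.0]
grid, virtual = densify(xs, rssi)  # 17 virtual points at 0.5 m spacing
```

The interpolant reproduces the calibration samples exactly at the measured points, so the virtual database is consistent with the survey data.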

Zhang, Xiang; Wu, Helei; Uradzi?ski, Marcin

2014-12-01

422

Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking

In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role for region-based tracking methods in maintaining the track. To this end, the choice of how to invoke the objective functional is made dynamically online, based on the degree of dependency between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle with statistical properties different from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

2013-01-01

423

The performance and nitrification properties of three BAFs, with ceramic, zeolite and carbonate media, respectively, were investigated to evaluate the feasibility of employing these materials as biological aerated filter media. All three BAFs showed promising COD and SS removal performance while the influent pH was 6.5-8.1, the air-liquid ratio was 5:1 and the HRT was 1.25-2.5 h. Ammonia removal in the BAFs was inhibited when the organic and ammonia nitrogen loadings were increased, but was promoted effectively as the pH value increased. Zeolite and carbonate were more suitable for nitrification than the ceramic particles when the influent pH was below 6.5. It is feasible to employ these media in a BAF, but adequate bed volume has to be supplied to satisfy the requirement of removing COD, SS and ammonia nitrogen simultaneously in a biofilter. Carbonate, with its strong buffer capacity, is more suitable for treating wastewater with variable or lower pH. PMID:20483593

Qiu, Liping; Zhang, Shoubin; Wang, Guangwei; Du, Mao'an

2010-10-01

425

Air quality is one of the areas in Europe where a series of EU Directives have been published with the aim of achieving improved long-term and harmonised air quality objectives across the European Union. This paper describes the production of a certified reference material intended to support QA/QC programmes of analytical laboratories in the framework of air quality monitoring activities. The certified values are the As, Cd, Ni and Pb masses in PM10 particles deposited on quartz filters (CRM SL-MR-2-PSF-01). All the steps of the certification, i.e. the material characterisation, homogeneity and stability evaluation and uncertainty calculation, were performed according to the ISO Guide 35 guidelines. The certification was conducted using a single-method characterisation approach, based on isotope dilution for cadmium, nickel and lead, and gravimetric standard addition calibration for arsenic, associated with inductively coupled plasma mass spectrometry (ICP-MS). The amounts of the four elements are in the range of the target values regulated by EU Directives. PMID:25260410

Oster, Caroline; Labarraque, Guillaume; Fisicaro, Paola

2015-04-01

426

Aluminum hydroxide fibers approximately 2 nanometers in diameter and with surface areas ranging from 200 to 650 m²/g have been found to be highly electropositive. When dispersed in water they are able to attach to and retain electronegative particles. When combined into a composite filter with other fibers or particles they can filter bacteria and nano size particulates such as

Frederick Tepper; Leonid Kaledin

2009-01-01

427

A novel emergency evacuation model based on a modified Particle Swarm Optimization was presented. Linear Weight Decreasing Particle Swarm Optimization was introduced to simulate individuals' movement. Compared with the social force model and the CA model, the new model produced good evacuation results. A prototype system for emergency evacuation simulation was implemented based on a Geographic Information System (GIS) application framework. This

Wang Cheng; Yang Bo; Li Lijun; Huang Hua

2008-01-01

428

NASA Astrophysics Data System (ADS)

A hybrid data assimilation scheme designed for operational assimilation of satellite sea surface temperatures (SST) into an ocean model has been developed and validated against in-situ observations. The scheme consists of an optimal interpolation (OI) part and a greatly simplified Kalman filter (KF) part. The OI is performed only in the longitudinal and latitudinal directions. A climatological field is used as a background field for the interpolation. It is constructed by fitting daily averages of satellite SST to the annual mean, annual, and semiannual harmonics in a 20 km by 20 km grid. The background error covariance is approximated by a spatially varying two-dimensional exponential covariance model. The parameters of the covariance model are fitted to the deviations of the satellite data from the background field using data from a full year. The simplified KF uses ocean model forecasts as a background field. It is based on the assumption that it is possible to neglect horizontal SST covariances in the filter and that the typical time scale for vertical mixing in the mixed layer is much shorter than the average time between observations. We therefore assume that the error variance in a column of water is evenly spread out throughout the mixed layer. The result of these simplifications is a computationally very efficient KF. A one year validation of the scheme is performed for the year 2001 using an operational eddy resolving ocean model covering the North Sea and the Baltic Sea. It is found that assimilation of sea surface temperature data reduces the model root mean square error from 1.13 °C to 0.70 °C. The hybrid scheme is found to reduce the root mean square error slightly further than the simplified KF without OI, to 0.66 °C. The inclusion of spatially varying satellite error variances does not improve the performance of the scheme significantly.
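With horizontal covariances neglected, the per-column simplification described above reduces, in its most basic form, to a scalar Kalman update; the temperatures and error variances below are made-up illustrative numbers, not values from the study:

```python
def kf_update(forecast, var_f, obs, var_o):
    """One scalar Kalman update: blend a model SST forecast with a
    satellite SST observation according to their error variances."""
    gain = var_f / (var_f + var_o)              # Kalman gain
    analysis = forecast + gain * (obs - forecast)
    var_a = (1.0 - gain) * var_f                # reduced analysis variance
    return analysis, var_a

# hypothetical column: model forecast 10.0 °C (var 0.5), satellite obs 11.0 °C (var 0.5)
sst, var = kf_update(10.0, 0.5, 11.0, 0.5)
```

With equal error variances the analysis falls midway between forecast and observation, and the analysis variance is halved.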

Larsen, J.; Hyer, J. L.; She, J.

2007-03-01

429

Fishing for Data: Using Particle Swarm Optimization to Search Data

NASA Astrophysics Data System (ADS)

As the size of data and model sets continues to increase, more efficient ways are needed to sift through the available information. We present a computational method which will efficiently search large parameter spaces to either map the space or find individual data/models of interest. Particle swarm optimization (PSO) is a subclass of artificial life computer algorithms. The PSO algorithm attempts to leverage "swarm intelligence" toward finding optimal solutions to a problem. This system is often based on a biological model of a swarm (e.g. schooling fish). These biological models are broken down into a few simple rules which govern the behavior of the system. "Agents" (e.g. fish) are introduced, and the agents, following the rules, search out solutions much like a fish would seek out food. We have made extensive modifications to the standard PSO model which increase its efficiency, as well as adding the capacity to map a parameter space and find multiple solutions. Our modified PSO is ideally suited to search and map large sets of data/models which are degenerate or too numerous to analyze by hand. One example would be radiative transfer models, which are inherently degenerate. Applying the PSO algorithm allows the degeneracy space to be mapped and thus better determines limits on dust shell parameters. Another example is searching through legacy data from a survey for hints of Polycyclic Aromatic Hydrocarbon emission. What might have once taken years of searching (and many frustrated graduate students) can now be relegated to a computer which will work day and night for only the cost of electricity. We hope this algorithm will allow fellow astronomers to search data and models more efficiently, thereby freeing them to focus on the physics of the Universe.

Caputo, Daniel P.; Dolan, R.

2010-01-01

430

Gaussian Filters for Nonlinear Filtering Problems

In this paper we develop and analyze real-time and accurate filters for nonlinear filtering problems based on the Gaussian distributions. We present the systematic formulation of Gaussian filters and develop efficient and accurate numerical integration of the optimal filter. We also discuss the mixed Gaussian filters in which the conditional probability density is approximated by the sum of Gaussian distributions.

Kazufumi Ito; Kaiqi Xiong

1999-01-01

431

NASA Astrophysics Data System (ADS)

This paper focuses on the operational issues of a Two-echelon Single-Vendor-Multiple-Buyers Supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. Based on the optimal sales quantity, the optimal sales price can be obtained, which in turn determines the optimal channel profit and the contract price between the vendor and the buyers. All these parameters depend on the understanding of the revenue sharing between the vendor and the buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from the PSO are compared with the best known results reported in the literature.

Sue-Ann, Goh; Ponnambalam, S. G.

432

Analog FIR Filter Used for Range-Optimal Pulsed Radar Applications

The matched filter is one of the most critical blocks in radar applications. Different measured ranges and relative velocities of a target require different matched filter bandwidths to maximize the radar signal-to-noise ratio (SNR...
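In discrete time, the matched-filter principle amounts to correlating the received samples with the transmitted pulse; the peak of the output marks the pulse delay and maximizes SNR in white noise. The sketch below is noise-free and generic, with an invented pulse and delay, and does not reproduce the paper's analog FIR realization:

```python
def matched_filter(received, template):
    """Correlate received samples with the transmitted pulse template;
    the output peaks at the lag where the echo aligns with the template."""
    n, m = len(received), len(template)
    out = []
    for lag in range(n - m + 1):
        out.append(sum(received[lag + k] * template[k] for k in range(m)))
    return out

pulse = [1.0, 1.0, -1.0, 1.0]          # hypothetical transmitted code
rx = [0.0, 0.0, 0.0] + pulse + [0.0]   # echo delayed by 3 samples (noise-free)
out = matched_filter(rx, pulse)
delay = max(range(len(out)), key=lambda i: out[i])  # estimated pulse delay
```

At the true delay the output equals the pulse energy (here 4.0), while mismatched lags give much smaller values.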

Su, Eric Chen

2014-08-13