
1

Particle Filter with Swarm Move for Optimization

The particle filter, which has gained popularity over the last decade for solving sequential Bayesian inference problems, is applied here to optimization: an artificial dynamic distribution is designed so that the particle filter algorithm can be employed, with a swarm move inspired by the flocking of birds. C. Ji et al., LNCS 5199, pp. 909-918, Springer-Verlag Berlin Heidelberg, 2008.

Yang, Shengxiang

2

Clever particle filters, sequential importance sampling and the optimal proposal

NASA Astrophysics Data System (ADS)

Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
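
The variance-reduction claim can be illustrated numerically. A minimal sketch, assuming a scalar linear-Gaussian toy model (a stand-in, not the geophysical systems discussed above), comparing importance-weight variance under the bootstrap (transition) proposal and the optimal proposal:

```python
import math
import random

random.seed(0)

# Assumed scalar model: x_{k+1} = a*x_k + N(0, q2),  y_{k+1} = x_{k+1} + N(0, r2)
a, q2, r2 = 0.9, 1.0, 0.5

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

n = 5000
particles = [random.gauss(0.0, 1.0) for _ in range(n)]   # samples at time tk
x_true = random.gauss(0.0, math.sqrt(q2))                # one step of truth
y = x_true + random.gauss(0.0, math.sqrt(r2))            # observation at tk+1

# Bootstrap proposal: draw from the transition, then weight by the likelihood.
w_boot = []
for xk in particles:
    x_new = a * xk + random.gauss(0.0, math.sqrt(q2))
    w_boot.append(normal_pdf(y, x_new, r2))

# Optimal proposal: the weight p(y | x_k) = N(y; a*x_k, q2 + r2) does not
# depend on the drawn particle, only on x_k.
w_opt = [normal_pdf(y, a * xk, q2 + r2) for xk in particles]

def norm_var(w):
    # Variance of the normalized weights around the uniform weight 1/n.
    s = sum(w)
    wn = [wi / s for wi in w]
    m = 1.0 / len(wn)
    return sum((wi - m) ** 2 for wi in wn) / len(wn)

print(norm_var(w_opt), norm_var(w_boot))  # lower variance expected for the optimal proposal
```

The optimal proposal removes the extra randomness of the transition draw from the weights (a Rao-Blackwellization argument), which is exactly the minimization property stated in the abstract.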

Snyder, Chris

2014-05-01

3

Particle Swarm Optimization of Passive Filters for Industrial Plants in Distribution Networks

Single-tuned passive filters are considered one of the most effective and economical means of harmonic mitigation. One important factor to consider while designing passive filters is the source voltage harmonics. This article presents a novel approach based on a particle swarm technique to optimize the design of the single-tuned passive filters for industrial plants in distribution networks. The filter design
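
For context, the textbook sizing relations behind a single-tuned filter can be sketched as follows (illustrative per-phase values assumed here, not the article's PSO-optimized design):

```python
import math

# Single-tuned passive filter sizing, standard textbook relations:
#   X_C = V^2 / Q_c,   X_L = X_C / h^2   (branch tuned at harmonic h)
V = 400.0      # voltage, volts (assumed)
Qc = 50e3      # reactive-power requirement, var (assumed)
h = 5          # tuning harmonic (5th)
f1 = 50.0      # fundamental frequency, Hz
w1 = 2.0 * math.pi * f1

Xc = V ** 2 / Qc          # capacitive reactance at the fundamental
Xl = Xc / h ** 2          # inductive reactance at the fundamental
C = 1.0 / (w1 * Xc)       # farads
L = Xl / w1               # henries

# The L-C branch resonates at h times the fundamental:
f_tuned = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(round(f_tuned, 1))  # prints 250.0 (= 5 * 50 Hz)
```

An optimizer such as PSO would then adjust these parameters (and a damping resistance) against harmonic-distortion and loss objectives, including source-voltage harmonics as noted above.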

H. H. Zeineldin; A. F. Zobaa

2011-01-01

4

NASA Astrophysics Data System (ADS)

We present design optimization of wavelength filters based on long period waveguide gratings (LPWGs) using the adaptive particle swarm optimization (APSO) technique. We demonstrate optimization of the LPWG parameters for single-band, wide-band and dual-band rejection filters for testing the convergence of APSO algorithms. After convergence tests on the algorithms, the optimization technique has been implemented to design more complicated application specific filters such as erbium doped fiber amplifier (EDFA) amplified spontaneous emission (ASE) flattening, erbium doped waveguide amplifier (EDWA) gain flattening and pre-defined broadband rejection filters. The technique is useful for designing and optimizing the parameters of LPWGs to achieve complicated application specific spectra.

Semwal, Girish; Rastogi, Vipul

2014-01-01

5

Particle swarm optimization-based approach for optical finite impulse response filter design.

This study presents what is to our knowledge a new and efficient method for the design of an optical finite impulse response (FIR) filter by employing a particle swarm optimization technique. With the method proposed, the design of an optical FIR filter, which is able to provide an arbitrary spectrum output based on crystal birefringence, could be implemented with good performance and high efficiency. The design procedure is discussed. A typical example of a green/magenta filter used in a liquid crystal on silicon projection display is included to demonstrate the feasibility and efficiency of this method in this design process as compared with simulated annealing. PMID:12645986

Zhou, Ying; Zeng, Guangjie; Yu, Feihong

2003-03-10

6

Visual Tracking & Particle Filters

Lecture slides: Visual Tracking & Particle Filters. Patrick Pérez, Irisa, Feb. 2012. Topics include a definition attempt, manually selected video objects, interest points (corners, blobs; cf. http://www.robots.ox.ac.uk/~vgg/research/vgoogle/), and why tracking is harder than it might seem: temporal variability of visual appearance, low video ...

LeGland, François

7

Optimal Particle Filters

IEEE Transactions on Signal Processing, Vol. 56, No. 10, October 2008, p. 4598. Estimation of the time-varying (TV) parameters of a harmonic or chirp signal using particle filtering (PF) tools. Similar applications involve a single TV harmonic or chirp signal, e.g., TV Doppler estimation in communications.

Sidiropoulos, Nikolaos D.

8

Implicit particle filters and applications

NASA Astrophysics Data System (ADS)

Implicit particle filters use a new sampling technique which finds high-probability samples of multidimensional probability densities by solving equations with a random input. With this new strategy, the particles generated by the filter will focus on high probability regions of the posterior probability density and, thus, the filter is applicable even if the state dimension is large. We will present the details of implicit particle filters and some applications to large scale problems.

Tu, X.

2012-12-01

9

Robert Collins Particle Filter Failures

CSE598C lecture slides, Robert Collins: Particle Filter Failures. References include King and Forsyth, "How Does CONDENSATION ..." and a text on stochastic processes (2nd edition, Academic Press, 1975). For simplicity, the stationary analysis focuses on tracking problems with stationary distributions.

Collins, Robert T.

10

Optimal filtering and filter stability of linear stochastic delay systems

NASA Technical Reports Server (NTRS)

Optimal filtering equations are obtained for very general linear stochastic delay systems. Stability of the optimal filter is studied in the case where there are no delays in the observations. Using the duality between linear filtering and control, asymptotic stability of the optimal filter is proved. Finally, the cascade of the optimal filter and the deterministic optimal quadratic control system is shown to be asymptotically stable as well.
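
In the delay-free linear-Gaussian case, the optimal filter referred to above is the Kalman filter, and the stability result corresponds to the error covariance converging regardless of its initialization. A minimal scalar sketch under an assumed model (not the paper's delay-system equations):

```python
# Assumed scalar model: x_{k+1} = a*x_k + N(0, q),  y_k = x_k + N(0, r)
a, q, r = 0.9, 1.0, 0.5

def kalman_step(m, p, y):
    # Predict through the dynamics.
    m_pred = a * m
    p_pred = a * a * p + q
    # Update with the measurement y.
    k = p_pred / (p_pred + r)          # Kalman gain
    m_new = m_pred + k * (y - m_pred)
    p_new = (1.0 - k) * p_pred
    return m_new, p_new

# Filter stability: start the covariance absurdly large; it still settles
# at the fixed point of the Riccati recursion.
m, p = 0.0, 100.0
for y in [1.0, 0.4, -0.2, 0.7] * 50:
    m, p = kalman_step(m, p, y)
print(round(p, 4))  # prints 0.3605, the Riccati fixed point for these a, q, r
```

The covariance limit is independent of both the starting covariance and the particular measurement sequence, which is the asymptotic-stability property proved in the paper (there via filtering/control duality).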

Kwong, R. H.-S.; Willsky, A. S.

1977-01-01

11

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The "inverted candle filter system" uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The "sheet filter system" uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and to ultimately demonstrate one of the concepts in pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies.
A plan for the Option I, bench-scale test program has also been developed based on the issues identified. The two advanced barrier filter systems have been found to have the potential to be significantly more reliable and less expensive to operate than standard ceramic candle filter system designs. Their key development requirements are the assessment of the design and manufacturing feasibility of the ceramic filter elements, and the small-scale demonstration of their conceptual reliability and availability merits.

R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

1998-04-30

12

Design of Optimal Digital Filters

NASA Astrophysics Data System (ADS)

Four methods for designing digital filters optimal in the Chebyshev sense are developed. The properties of these filters are investigated and compared. An analytic method for designing narrow-band FIR filters using Zolotarev polynomials, which are extensions of Chebyshev polynomials, is proposed. Bandpass and bandstop narrow-band filters as well as lowpass and highpass filters can be designed by this method. The design procedure, related formulae and examples are presented. An improved method of designing optimal minimum phase FIR filters by directly finding zeros is proposed. The zeros off the unit circle are found by an efficient special purpose root-finding algorithm without deflation. The proposed algorithm utilizes the passband minimum ripple frequencies to establish the initial points, and employs a modified Newton's iteration to find the accurate initial points for a standard Newton's iteration. The proposed algorithm can be used to design very long filters (L = 325) with very high stopband attenuations. The design of FIR digital filters in the complex domain is investigated. The complex approximation problem is converted into a near equivalent real approximation problem. A standard linear programming algorithm is used to solve the real approximation problem. Additional constraints are introduced which allow weighting of the phase and/or group delay of the approximation. Digital filters are designed which have nearly constant group delay in the passbands. The desired constant group delay which gives the minimum Chebyshev error is found to be smaller than that of a linear phase filter of the same length. These filters, in addition to having a smaller, approximately constant group delay, have better magnitude characteristics than exactly linear phase filters with the same length. The filters have nearly equiripple magnitude and group delay. 
The problem of IIR digital filter design in the complex domain is formulated such that the existence of a best approximation is guaranteed. An efficient and numerically stable algorithm for the design is proposed. Methods to establish a good initial point are investigated. Digital filters are designed which have nearly constant group delay in the passbands. The magnitudes of the filter poles near the passband edge are larger than those of poles far from the passband edge. A delay overshoot may occur in the transition band (the don't-care region), and it can be reduced by decreasing the maximum allowed pole magnitude of the design problem at the expense of increasing the approximation error.
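
As a point of reference for the Chebyshev-optimal designs above, the standard Parks-McClellan equiripple algorithm is available as scipy.signal.remez. A lowpass sketch with an assumed illustrative specification (not the thesis's Zolotarev or complex-domain methods):

```python
import numpy as np
from scipy.signal import remez, freqz

# Equiripple (Chebyshev-optimal) lowpass FIR via Parks-McClellan.
# Band edges are in cycles/sample with fs = 1.0 (assumed spec):
# passband 0-0.2, stopband 0.25-0.5.
numtaps = 65
taps = remez(numtaps, [0.0, 0.2, 0.25, 0.5], [1.0, 0.0], fs=1.0)

# Evaluate the magnitude response on a dense grid.
w, h = freqz(taps, worN=2048)
f = w / (2.0 * np.pi)          # radians/sample -> cycles/sample
mag = np.abs(h)

passband = mag[f <= 0.2]
stopband = mag[f >= 0.25]
print(round(float(stopband.max()), 4))  # peak stopband ripple
```

The resulting error alternates equally between its extrema in both bands, the defining property of the Chebyshev-optimal solution discussed in the abstract.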

Chen, Xiangkun

13

Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization

The distributed SLAM system has a similar estimation performance and requires only one-fifth of the computation time compared with a centralized particle filter. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters in the particle filter. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed that mixes the innovation method and the effective-particle-number method, combining the advantages of both. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and gives the system better tolerance and robustness. PMID:24883362
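
The number-of-effective-particles criterion mentioned above is a standard degeneracy measure and is easy to sketch; a minimal generic illustration (not the paper's adaptive importance function):

```python
import random

random.seed(1)

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2) for normalized weights: equals N when the
    # weights are uniform, approaches 1 as a single weight dominates.
    s = sum(weights)
    return 1.0 / sum((w / s) ** 2 for w in weights)

def systematic_resample(particles, weights):
    # Low-variance systematic resampling: one uniform offset, N strides.
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

particles = list(range(10))
weights = [0.01] * 9 + [0.91]
print(round(effective_sample_size(weights), 2))  # prints 1.21: severe degeneracy

# Resample only when N_eff falls below a threshold (here N/2), so that
# diversity is not destroyed by unnecessary resampling.
if effective_sample_size(weights) < len(particles) / 2:
    particles = systematic_resample(particles, weights)
```

Triggering resampling on N_eff rather than at every step is one common way to delay the impoverishment the abstract describes.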

Pei, Fujun; Wu, Mei; Zhang, Simin

2014-01-01

14

Design and optimization of nanostructured optical filters

NASA Astrophysics Data System (ADS)

Optical filters encompass a vast array of devices and structures for a wide variety of applications. Generally speaking, an optical filter is some structure that applies a designed amplitude and phase transform to an incident signal. Different classes of filters have vastly divergent characteristics, and one of the challenges in the optical design process is identifying the ideal filter for a given application and optimizing it to obtain a specific response. In particular, it is highly advantageous to obtain a filter that can be seamlessly integrated into an overall device package without requiring exotic fabrication steps, extremely sensitive alignments, or complicated conversions between optical and electrical signals. This dissertation explores three classes of nano-scale optical filters in an effort to obtain different types of dispersive response functions. First, dispersive waveguides are designed using a sub-wavelength periodic structure to transmit a single TE propagating mode with very high second order dispersion. Next, an innovative approach for decoupling waveguide trajectories from Bragg gratings is outlined and used to obtain a uniform second-order dispersion response while minimizing fabrication limitations. Finally, high Q-factor microcavities are coupled into axisymmetric pillar structures that offer extremely high group delay over very narrow transmission bandwidths. While these three novel filters are quite diverse in their operation and target applications, they offer extremely compact structures given the magnitude of the dispersion or group delay they introduce to an incident signal. They are also designed and structured so as to be formed on an optical wafer scale using standard integrated circuit fabrication techniques. A number of frequency-domain numerical simulation methods are developed to fully characterize and model each of the different filters. 
The complete filter response, which includes the dispersion and delay characteristics and optical coupling, is used to evaluate each filter design concept. However, due to the complex nature of the structure geometries and electromagnetic interactions, an iterative optimization approach is required to improve the structure designs and obtain a suitable response. To this end, a Particle Swarm Optimization algorithm is developed and applied to the simulated filter responses to generate optimal filter designs.

Brown, Jeremiah Daniel

15

A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life
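
The particle swarm paradigm outlined above is compact enough to sketch in full; a minimal global-best PSO on a sphere benchmark, with standard textbook parameters assumed rather than taken from the paper:

```python
import random

random.seed(42)

def pso(f, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(xi * xi for xi in x)
best, val = pso(sphere)
print(val)  # near zero: the swarm collapses onto the minimum at the origin
```

The velocity update is the bird-flocking analogy the paper draws: each particle is attracted both to its own best position and to the swarm's best.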

James N. Kennedy; Russell C. Eberhart

1995-01-01

16

Particle Swarm Optimization Toolbox

NASA Technical Reports Server (NTRS)

The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents, in the hope of providing a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. 
These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.

Grant, Michael J.

2010-01-01

17

Particle filter tracking for the banana problem

NASA Astrophysics Data System (ADS)

In this paper we present an approach for tracking with a high-bandwidth active sensor in very long range scenarios. We show that in these scenarios the extended Kalman filter is not desirable as it suffers from major consistency problems; and most flavors of particle filter suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and the divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter is shown to avoid this diversity problem while producing consistent results. The regularization is accomplished using a modified version of the Epanechnikov kernel.
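
The regularization step described above amounts to jittering resampled particles with kernel-distributed noise. A minimal sketch using the unmodified Epanechnikov kernel and an assumed fixed bandwidth (the paper uses a modified kernel and a data-driven bandwidth):

```python
import random
import statistics

random.seed(7)

def epanechnikov_sample():
    # Devroye's trick: with u1, u2, u3 ~ U(-1, 1), return u2 if |u3| is the
    # largest of the three, else u3; the result has density (3/4)(1 - x^2).
    u1, u2, u3 = (random.uniform(-1.0, 1.0) for _ in range(3))
    return u2 if abs(u3) >= abs(u2) and abs(u3) >= abs(u1) else u3

def regularize(particles, bandwidth):
    # After resampling, jitter each (possibly duplicated) particle with
    # scaled Epanechnikov noise to restore diversity.
    return [x + bandwidth * epanechnikov_sample() for x in particles]

# Worst case after resampling with very low process noise: every particle
# has collapsed onto the same state.
cloud = [2.0] * 1000
jittered = regularize(cloud, bandwidth=0.3)
print(len(set(jittered)) > 1)  # prints True: diversity restored
```

Because the kernel has bounded support, the jitter perturbs each particle by at most one bandwidth, so the cloud regains diversity without drifting far from the resampled estimate.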

Romeo, Kevin; Willett, Peter; Bar-Shalom, Yaakov

2013-09-01

18

Particle Filtering in Practice

Rickard Karlsson, ISIS, 2004-11-04: Particle Filtering in Practice. Sensor fusion, positioning and tracking. Rickard Karlsson, Automatic Control, Linköping University, Sweden, rickard@isy.liu.se. Also includes a perspective on particle filtering within ISIS (Linköping, 2004-11-05).

Zhao, Yuxiao

19

Angle only tracking with particle flow filters

NASA Astrophysics Data System (ADS)

We show the results of numerical experiments for tracking ballistic missiles using only angle measurements. We compare the performance of an extended Kalman filter with a new nonlinear filter using particle flow to compute Bayes' rule. For certain difficult geometries, the particle flow filter is an order of magnitude more accurate than the EKF. Angle only tracking is of interest in several different sensors; for example, passive optics and radars in which range and Doppler data are spoiled by jamming.

Daum, Fred; Huang, Jim

2011-09-01

20

Application of particle filters to single-target tracking problems

NASA Astrophysics Data System (ADS)

In a Bayesian framework, all single target tracking problems reduce to recursive computation of the posterior density of the target state. Particle filters approximate the optimal Bayesian recursion by propagating a set of random samples with associated weights. In the last decade, there have been numerous contributions to the theory and applications of particle filters. Much study has focussed on design issues such as appropriate selection of the importance density, the use of resampling techniques which mitigate sample degeneracy and the choice of a suitable random variable space upon which to implement the particle filter in order to minimise numerical complexity. Although the effect of these design choices is, in general, well known, their relevance to target tracking problems has not been fully established. These design issues are considered for single target tracking applications involving target manoeuvres and clutter. Two choices of importance density are studied and methods for enhancing particle diversity through the avoidance of particle duplication in the resampling step are considered for each importance density. The possibility of reducing the dimension of the space over which the particle filter is implemented is considered. Based on simulation results, a few key observations are drawn about which aspects of particle filter design most influence their performance in target tracking applications. The numerical simulations also provide insights into the relationship between the state dimension and the number of particles needed to improve upon the performance of the standard tracking filters.
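
For reference, the standard bootstrap particle filter that such design studies build on can be sketched as follows (a scalar random-walk model is assumed here, not the manoeuvre-and-clutter scenarios of the paper):

```python
import math
import random

random.seed(3)

# Assumed scalar model: x_k = x_{k-1} + N(0, q),  y_k = x_k + N(0, r)
q, r, n = 0.1, 0.5, 500

def likelihood(y, x):
    # Gaussian likelihood up to a constant factor (normalization cancels).
    return math.exp(-(y - x) ** 2 / (2.0 * r))

# Simulate a ground-truth track and its noisy measurements.
truth = [0.0]
for _ in range(50):
    truth.append(truth[-1] + random.gauss(0.0, math.sqrt(q)))
obs = [x + random.gauss(0.0, math.sqrt(r)) for x in truth[1:]]

particles = [random.gauss(0.0, 1.0) for _ in range(n)]
estimates = []
for y in obs:
    # Predict: propagate particles through the dynamics (the importance density).
    particles = [x + random.gauss(0.0, math.sqrt(q)) for x in particles]
    # Update: weight each particle by the measurement likelihood.
    w = [likelihood(y, x) for x in particles]
    total = sum(w)
    estimates.append(sum(wi * xi for wi, xi in zip(w, particles)) / total)
    # Resample (multinomial here, for brevity).
    particles = random.choices(particles, weights=w, k=n)

mse = sum((e - t) ** 2 for e, t in zip(estimates, truth[1:])) / len(estimates)
print(mse < r)  # filtering should beat the raw measurement variance
```

The design choices the abstract lists map directly onto lines of this sketch: the importance density (the predict step), the resampling scheme (the last line of the loop), and the dimension of the particle state.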

Morelande, Mark R.; Challa, Subhash; Gordon, Neil J.

2003-12-01

21

Early maritime applications of particle filtering

NASA Astrophysics Data System (ADS)

This paper provides a brief history of some operational particle filters that were used by the U.S. Coast Guard and U.S. Navy. Starting in 1974 the Coast Guard system provided Search and Rescue Planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972. The systems operated in a sequential, Bayesian manner. A prior distribution for the target's location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target's location. This distribution was used as the prior for the next iteration of planning and search.

Richardson, Henry R.; Stone, Lawrence D.; Monach, W. Reynolds; Discenza, Joseph H.

2004-01-01


23

Design of optimal hybrid form FIR filter

This paper examines the problem of designing the optimal hybrid form FIR filter subjected to a minimum cycle-time constraint. We formulate the problem as one of determining the optimal partitioning of the hybrid form FIR filter into subsections. Each subsection can be optimized independently using other methods. We then show how the problem can be solved efficiently using

Kei-yong Khoo; Zhan Yu; Alan N. Willson Jr.

2001-01-01

24

Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors' perspective, including variations in the

Riccardo Poli; James Kennedy; Tim Blackwell

2007-01-01

25

Master's Thesis Adaptive Particle Filter based on the

Table-of-contents fragment: 1.2 Related work; II Particle filter; 2.1 Auxiliary particle filter; 2.2 Gaussian particle filter; a section on a spherical system; 3.8 a proposed pdf resembling a shockwave.

Berns, Karsten

26

OPTIMIZATION OF ADVANCED FILTER SYSTEMS

Two advanced, hot gas, barrier filter system concepts have been proposed by the Siemens Westinghouse Power Corporation to improve the reliability and availability of barrier filter systems in applications such as PFBC and IGCC power generation. The two hot gas, barrier filter system concepts, the inverted candle filter system and the sheet filter system, were the focus of bench-scale testing, data evaluations, and commercial cost evaluations to assess their feasibility as viable barrier filter systems. The program results show that the inverted candle filter system has high potential to be a highly reliable, commercially successful, hot gas, barrier filter system. Some types of thin-walled, standard candle filter elements can be used directly as inverted candle filter elements, and the development of a new type of filter element is not a requirement of this technology. Six types of inverted candle filter elements were procured and assessed in the program in cold flow and high-temperature test campaigns. The thin-walled McDermott 610 CFCC inverted candle filter elements, and the thin-walled Pall iron aluminide inverted candle filter elements are the best candidates for demonstration of the technology. Although the capital cost of the inverted candle filter system is estimated to range from about 0 to 15% greater than the capital cost of the standard candle filter system, the operating cost and life-cycle cost of the inverted candle filter system is expected to be superior to that of the standard candle filter system. Improved hot gas, barrier filter system availability will result in improved overall power plant economics. The inverted candle filter system is recommended for continued development through larger-scale testing in a coal-fueled test facility, and inverted candle containment equipment has been fabricated and shipped to a gasifier development site for potential future testing. 
Two types of sheet filter elements were procured and assessed in the program through cold flow and high-temperature testing. The Blasch, mullite-bonded alumina sheet filter element is the only candidate currently approaching qualification for demonstration, although this oxide-based, monolithic sheet filter element may be restricted to operating temperatures of 538 C (1000 F) or less. Many other types of ceramic and intermetallic sheet filter elements could be fabricated. The estimated capital cost of the sheet filter system is comparable to the capital cost of the standard candle filter system, although this cost estimate is very uncertain because the commercial price of sheet filter element manufacturing has not been established. The development of the sheet filter system could result in a higher reliability and availability than the standard candle filter system, but not as high as that of the inverted candle filter system. The sheet filter system has not reached the same level of development as the inverted candle filter system, and it will require more design development, filter element fabrication development, small-scale testing and evaluation before larger-scale testing could be recommended.

R.A. Newby; M.A. Alvin; G.J. Bruck; T.E. Lippert; E.E. Smeltzer; M.E. Stampahar

2002-06-30

27

This paper introduces a new approximate solution of the optimal nonlinear filter suitable for nonlinear oceanic and atmospheric data assimilation problems. The method is based on a local linearization in a low-rank kernel representation of the state's probability density function. In the resulting low-rank kernel particle Kalman (LRKPK) filter, the standard (weight type) particle filter correction is complemented by a

I. Hoteit; D.-T. Pham; G. Triantafyllou; G. Korres

2008-01-01

28

Diagnosis of a Roller Bearing Using Deterministic Particle Filtering

Diagnosis of a Roller Bearing Using Deterministic Particle Filtering. Ouafae Bennis and Frédéric Kratz (bennis@univ-orleans.fr, frederic.kratz@ensi-bourges.fr). Abstract: In this article, the detection of a fault on the inner race of a roller bearing is presented as a problem of optimal estimation of a hidden fault, via measures delivered

Boyer, Edmond

29

Quantum Filtering and Optimal Control

NASA Astrophysics Data System (ADS)

Quantum mechanical systems exhibit an inherently probabilistic nature upon measurement which excludes in principle the singular direct observability case. Quantum theory of time continuous measurements and quantum filtering developed by VPB on the basis of semi-Markov independent increment models for quantum noise and quantum nondemolition (QND) observability is generalized for demolition indirect measurements of quantum unstable systems satisfying the microcausality principle. The reduced quantum feedback-controlled dynamics is described both by linear semi-Markov and nonlinear conditionally-Markov stochastic master equations. Using this scheme for diffusive and counting measurement to describe the stochastic evolution of the open quantum system under the continuous indirect observation and working in parallel with classical indeterministic control theory, we show the conditionally-Markov Bellman equations for optimal feedback control of the a posteriori stochastic quantum states conditioned upon these measurements. The resulting Bellman equation for the diffusive observation is then applied to the explicitly solvable quantum linear-quadratic-Gaussian (LQG) problem which emphasizes many similarities with the corresponding classical control problem.

Belavkin, Viacheslav P.; Edwards, Simon

2008-08-01

30

Optimal frequency domain textural edge detection filter

NASA Technical Reports Server (NTRS)

An optimal frequency domain textural edge detection filter is developed and its performance evaluated. For the given model and filter bandwidth, the filter maximizes the amount of output image energy placed within a specified resolution interval centered on the textural edge. Filter derivation is based on relating textural edge detection to tonal edge detection via the complex low-pass equivalent representation of narrowband bandpass signals and systems. The filter is specified in terms of the prolate spheroidal wave functions translated in frequency. Performance is evaluated using the asymptotic approximation version of the filter. This evaluation demonstrates satisfactory filter performance for ideal and nonideal textures. In addition, the filter can be adjusted to detect textural edges in noisy images at the expense of edge resolution.

Townsend, J. K.; Shanmugan, K. S.; Frost, V. S.

1985-01-01

31

Westinghouse advanced particle filter system

Integrated Gasification Combined Cycles (IGCC), Pressurized Fluidized Bed Combustion (PFBC) and Advanced PFBC (APFB) are being developed and demonstrated for commercial power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC, PFBC and APFB in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of these advanced, solid fuel power generation cycles.

Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

1995-11-01

32

Parameter optimization and performance of backwashing in biological aerated filters

The backwashing optimization, which included gas washing, water washing and water rinsing, was carried out in three biological aerated filters with zeolite, ceramic particle and carbonate media. The recovery of headloss and treatment efficiency in the biofilters after backwashing were also investigated for deeper understanding of the relationship of headloss and backwashing in BAF. The results showed that the COD

Liping Qiu; Guangwei Wang; Shoubin Zhang; Jingying Chen; Yongzheng Liu

2010-01-01

33

Westinghouse advanced particle filter system

Integrated Gasification Combined Cycles (IGCC) and Pressurized Fluidized Bed Combustion (PFBC) are being developed and demonstrated for commercial, power generation application. Hot gas particulate filters are key components for the successful implementation of IGCC and PFBC in power generation gas turbine cycles. The objective of this work is to develop and qualify through analysis and testing a practical hot gas ceramic barrier filter system that meets the performance and operational requirements of PFBC and IGCC systems. This paper updates the assessment of the Westinghouse hot gas filter design based on ongoing testing and analysis. Results are summarized from recent computational fluid dynamics modeling of the plenum flow during back pulse, analysis of candle stressing under cleaning and process transient conditions and testing and analysis to evaluate potential flow induced candle vibration.

Lippert, T.E.; Bruck, G.J.; Sanjana, Z.N.; Newby, R.A.

1994-10-01

34

Distance estimation using RSSI and particle filter.

This paper presents a particle filter algorithm for distance estimation using multiple antennas on the receiver's side and only one transmitter, where a received signal strength indicator (RSSI) of radio frequency was used. Two different placements of antennas were considered (parallel and circular). The physical layer of IEEE standard 802.15.4 was used for communication between transmitter and receiver. The distance was estimated as the hidden state of a stochastic system and therefore a particle filter was implemented. The RSSI acquisitions were used for the computation of importance weights within the particle filter algorithm. The weighted particles were re-sampled in order to ensure proper distribution and density. Log-normal and ground reflection propagation models were used for the modeling of a prior distribution within a Bayesian inference. PMID:25457044

Svečko, Janja; Malajner, Marko; Gleich, Dušan

2014-11-14
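The loop described in this abstract (propagate particles, weight them with the RSSI likelihood, resample) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the log-normal path-loss constants P0, PLE and SIGMA, the random-walk process model, and the priors are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-normal path-loss model: rssi = P0 - 10*PLE*log10(d) + noise
P0, PLE, SIGMA = -40.0, 2.0, 2.0   # dBm at 1 m, path-loss exponent, shadowing std (dB)

def rssi_model(d):
    return P0 - 10.0 * PLE * np.log10(d)

def particle_filter_distance(rssi_seq, n_particles=2000):
    # Prior: distance uniform on (0.1, 20) m
    particles = rng.uniform(0.1, 20.0, n_particles)
    estimates = []
    for z in rssi_seq:
        # Process model: small random walk on the distance
        particles = np.clip(particles + rng.normal(0.0, 0.05, n_particles), 0.1, None)
        # Importance weights from the Gaussian (in dB) measurement likelihood
        w = np.exp(-0.5 * ((z - rssi_model(particles)) / SIGMA) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Multinomial resampling to avoid weight degeneracy
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

# Simulated receiver standing 5 m from the transmitter
true_d = 5.0
measurements = rssi_model(true_d) + rng.normal(0.0, SIGMA, 50)
est = particle_filter_distance(measurements)
print(f"final distance estimate: {est[-1]:.2f} m")
```

With 2 dB of shadowing noise a single RSSI reading is a poor ranging cue; the sequential weighting and resampling is what pulls the estimate toward the true distance.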

35

Optimization of Cosine Modulated Filter Bank for Narrowband RFI

Yingsi Liang et al. design a perfect-reconstruction (PR) cosine modulated filter bank that is tailored to mitigate the effects of narrowband radio frequency interference (RFI). The conventionally used optimization criterion for bandpass filtering

Rajan, Dinesh

36

Testing particle filters on convective scale dynamics

NASA Astrophysics Data System (ADS)

Particle filters have been developed in recent years to deal with the highly nonlinear dynamics and non-Gaussian error statistics that also characterize data assimilation on convective scales. In this work we explore the use of the efficient particle filter (P. van Leeuwen, 2011) for convective-scale data assimilation. The method is tested in an idealized setting, on two stochastic models. The models were designed to reproduce some of the properties of convection, for example the rapid development and decay of convective clouds. The first model is a simple one-dimensional, discrete-state birth-death model of clouds (Craig and Würsch, 2012). For this model, the efficient particle filter that includes nudging the variables shows significant improvement compared to the Ensemble Kalman Filter and the Sequential Importance Resampling (SIR) particle filter. The success of the combination of nudging and resampling, measured as RMS error with respect to the 'true state', is proportional to the nudging intensity. Significantly, even a very weak nudging intensity brings notable improvement over SIR. The second model is a modified version of a stochastic shallow water model (Würsch and Craig 2013), which contains more realistic dynamical characteristics of convective-scale phenomena. Using the efficient particle filter and different combinations of observations of the three field variables (wind, water 'height' and rain) allows the particle filter to be evaluated in comparison to a regime where only nudging is used. Sensitivity to the properties of the model error covariance is also considered. Finally, criteria are identified under which the efficient particle filter outperforms nudging alone. References: Craig, G. C. and M. Würsch, 2012: The impact of localization and observation averaging for convective-scale data assimilation in a simple stochastic model. Q. J. R. Meteorol. Soc., 139, 515-523. Van Leeuwen, P.
J., 2011: Efficient non-linear data assimilation in geophysical fluid dynamics. Computers and Fluids, doi:10.1016/j.compfluid.2010.11.011. Würsch, M. and G. C. Craig, 2013: A simple dynamical model of cumulus convection for data assimilation research, submitted to Met. Zeitschrift.

Haslehner, Mylene; Craig, George. C.; Janjic, Tijana

2014-05-01

37

An implicit particle filter for large dimensional data assimilation problems

NASA Astrophysics Data System (ADS)

Particle filters for data assimilation are usually presented in terms of an Ito stochastic ordinary differential equation (SODE). The task is to estimate the state a(t) of the SODE, with additional information provided by noisy observations bn, n=1,2,..., of this state. In principle, the solution of this problem is known: the optimal estimate of the state is the expected value of the solution of the model conditioned on available observations. The conditional mean can be calculated once the conditional probability density function (pdf) pn+1=p(a(tn+1)|b0,..., bn) is known. A particle filter approximates pn+1 by sequential Monte Carlo. A Sampling-Importance-Resampling (SIR) filter constructs, at each time tn, a prior density by following replicas of the model (called particles). The prior is updated by sampling weights determined by the observations bn+1, to yield a posterior density that approximates pn+1. Because the observations are not used when constructing the prior, particle paths are very likely to stray into regions of low probability and the number of particles required can grow catastrophically, especially if the dimension of the SODE is large. The implicit particle filter is a new sequential Monte Carlo method to solve the data assimilation problem. The filter was devised for large dimensional, non-linear, and non-Gaussian problems in which current methods fail or yield poor results. In essence, the implicit filter reverses the standard procedure described above. It first assigns a probability to each particle and then finds a sample that assumes it. This reversed procedure focuses all particles towards the observations and, thus, generates a thin particle beam to keep the particles in the high probability domain. Because the filter produces high probability samples only, the number of particles required remains manageable.
We present a new and very efficient implementation of the implicit particle filter for use in large dimensional data assimilation problems. Our implementation relies on a clever non-linear change of variables. The change of variables reduces a data assimilation problem of arbitrary size to solving a single algebraic equation in only one variable. We demonstrate the performance of our filter by applying it to the stochastic Kuramoto-Sivashinsky (SKS) equation. This equation is known to exhibit space-time chaos. We project its solution into an N-dimensional subspace spanned by N Fourier modes to obtain an N-dimensional Ito-Galerkin approximation. We vary the viscosity and the continuity (in space) of the noise process driving the equation to generate a variety of test problems, with dimensions ranging from 32 to 512. Linear and nonlinear observation operators are considered. We also outline how to deal with observations that are sparse in space and time. The performance of the implicit filters is compared to the performance of an SIR filter. The numerical results confirm that the implicit filter gives accurate state estimates by tracking only very few, but sharply focused, particles. The implicit filter also outperforms SIR in all cases considered.

Morzfeld, M.; Chorin, A. J.; Tu, X.

2010-12-01
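The standard SIR procedure that this abstract contrasts with the implicit filter (propagate particles with the model to form a prior, weight by the observation likelihood, resample) can be sketched on a toy scalar model. This is not the implicit filter and not the SKS equation from the paper; the model constants A, Q, R are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar state-space model:
#   a_{n+1} = A * a_n + process noise (var Q),  b_n = a_n + obs noise (var R)
A, Q, R = 0.9, 0.5, 0.5
steps, N = 100, 500

# Simulate a truth trajectory and noisy observations b_n
truth = np.zeros(steps)
for n in range(1, steps):
    truth[n] = A * truth[n - 1] + rng.normal(0, np.sqrt(Q))
obs = truth + rng.normal(0, np.sqrt(R), steps)

# SIR filter: prior from model replicas, weights from the observations,
# then multinomial resampling
particles = rng.normal(0, 1, N)
errors = []
for n in range(steps):
    particles = A * particles + rng.normal(0, np.sqrt(Q), N)      # prior
    w = np.exp(-0.5 * (obs[n] - particles) ** 2 / R)              # likelihood
    w /= w.sum()
    mean = np.sum(w * particles)                                  # posterior mean
    errors.append(abs(mean - truth[n]))
    particles = particles[rng.choice(N, N, p=w)]                  # resample

print(f"mean absolute error: {np.mean(errors):.2f}")
```

In this low-dimensional linear-Gaussian setting SIR works fine; the abstract's point is that without using the observations in the proposal, the particle count needed grows catastrophically with dimension.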

38

On Optimal Infinite Impulse Response Edge Detection Filters

The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated

Sudeep Sarkar; Kim L. Boyer

1991-01-01

39

MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

NASA Technical Reports Server (NTRS)

The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. 
When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. MEDOF was developed in 1992-1993.

Barton, R. S.

1994-01-01

40

Small curvature particle flow for nonlinear filters

NASA Astrophysics Data System (ADS)

We derive five new particle flow algorithms for nonlinear filters based on the small curvature approximation inspired by fluid dynamics. We find it extremely interesting that this physically motivated approximation generalizes two of our previous exact flow algorithms, namely incompressible flow and Gaussian flow. We derive a new algorithm to compute the inverse of the sum of two linear differential operators using a second homotopy, similar to Feynman's perturbation theory for quantum electrodynamics as well as Gromov's h-principle.

Daum, Fred; Huang, Jim

2012-05-01

41

Point Set Registration via Particle Filtering and Stochastic Dynamics

In this paper, we propose a particle filtering approach for the problem of registering two point sets that differ by a rigid body transformation. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in pose parameters obtained by running a few iterations of a certain local optimizer. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer approaches for registration. Thus, the novelty of our method is threefold: First, we employ a particle filtering scheme to drive the point set registration process. Second, we present a local optimizer that is motivated by the correlation measure. Third, we increase the robustness of the registration performance by introducing a dynamic model of uncertainty for the transformation parameters. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity (with respect to particle size) as well as maintains the temporal coherency of the state (no loss of information). Also unlike some alternative approaches for point set registration, we make no geometric assumptions on the two data sets. Experimental results are provided that demonstrate the robustness of the algorithm to initialization, noise, missing structures, and/or differing point densities in each set, on several challenging 2D and 3D registration scenarios. PMID:20558877

Sandhu, Romeil; Dambreville, Samuel; Tannenbaum, Allen

2013-01-01

42

Particle swarm optimization in electromagnetics

The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in

Jacob Robinson; Yahya Rahmat-Samii

2004-01-01
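The canonical inertia-weight PSO update that this overview explains can be sketched as follows. The inertia weight w and acceleration constants c1, c2 are conventional assumed values, not parameters taken from this paper, and the sphere function stands in for an electromagnetic cost function.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_minimize(f, dim=2, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Canonical (inertia-weight) particle swarm optimization sketch."""
    x = rng.uniform(-bound, bound, (n_particles, dim))   # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                     # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Sphere function: global minimum 0 at the origin
best, best_val = pso_minimize(lambda p: np.sum(p ** 2))
print(best_val)
```

The "movement and intelligence of swarms" in the abstract corresponds to the two stochastic pull terms: each particle is attracted to its own best position and to the swarm's best position.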

43

Optimal two-stage filtering of elastograms.

In ultrasound elastography, tissue axial strains are obtained through the differentiation of measured axial displacements. However, during the measurement process, the displacement signals are often contaminated with de-correlation noise caused by changes in the speckle pattern in the tissue. Thus, the application of the gradient operator on the displacement signals results in the presence of amplified noise in the axial strains, which severely obscures the useful information. The use of an effective denoising scheme is therefore imperative. In this paper, a method based on a two-stage consecutive filtering approach is proposed for the accurate estimation of axial strains. The presented method considers a cascaded system of a frequency filter and a time window, which are both designed such that the overall system operates optimally in a mean square error sense. Experimentation on simulated signals shows that the two-stage scheme employed in this study has good potential as a denoising method for ultrasound elastograms. PMID:22254895

Subramaniam, Suba R; Hon, Tsz K; Ling, Wing-Kuen; Georgakis, Apostolos

2011-01-01
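The noise amplification that motivates the two-stage design can be illustrated with a toy example: differentiating a noisy displacement signal amplifies the noise, while low-pass filtering before the gradient suppresses it. The boxcar kernel and noise levels below are arbitrary stand-ins, not the paper's MSE-optimal frequency filter and time window.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic axial displacement: a uniform 1% strain ramp plus noise
n = 1000
depth = np.arange(n) * 0.05            # mm, hypothetical sampling
displacement = 0.01 * depth            # true strain = 0.01 everywhere
noisy = displacement + rng.normal(0, 0.005, n)

# Direct differentiation amplifies the de-correlation noise
strain_direct = np.gradient(noisy, depth)

# Two-stage idea (sketch): low-pass filter first, then differentiate
kernel = np.ones(31) / 31
smoothed = np.convolve(noisy, kernel, mode="same")
strain_filtered = np.gradient(smoothed, depth)

interior = slice(50, -50)  # ignore convolution edge effects
err_direct = np.std(strain_direct[interior] - 0.01)
err_filtered = np.std(strain_filtered[interior] - 0.01)
print(err_direct, err_filtered)
```

The filtered strain error is more than an order of magnitude lower here; the paper's contribution is choosing both stages jointly so the cascade is optimal in a mean-square-error sense rather than ad hoc as in this sketch.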

44

Optimal edge filters explain human blur detection.

Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur. PMID:22984222

McIlhagga, William H; May, Keith A

2012-01-01

45

Particle Filtering for State Estimation in Nonlinear Industrial Systems

State estimation is a major problem in industrial systems, particularly in industrial robotics. To this end, Gaussian and nonparametric filters have been developed. In this paper, the extended Kalman filter, which assumes Gaussian measurement noise, is compared with the particle filter, which does not make any assumption on the measurement noise distribution. As a case study, the estimation of the

Gerasimos G. Rigatos

2009-01-01

46

Dynamic Neural Field Optimization using the Unscented Kalman Filter

Jeremy Fix, Matthieu Geist. The paper optimizes dynamic neural fields using unscented Kalman filters, a derivative-free algorithm for parameter estimation.

Paris-Sud XI, Université de

47

Multispectral image denoising with optimized vector bilateral filter.

Vector bilateral filtering has been shown to provide good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal to noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimization of the Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument through an analytical example as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches. PMID:24184727

Peng, Honghong; Rao, Raghuveer; Dianat, Sohail A

2014-01-01

48

Optimization design of filter banks in subband image coding

In this paper we present a new optimization-based method for designing filter banks in subband image coding. We formulate the design problem as a nonlinear optimization problem whose objective consists of both the performance metrics of the image coder; such as the peak signal to noise ratio (PSNR), and those of individual filters. In contrast to previous methods that design

Yi Shang; Longzhuang Li

1999-01-01

50

Human-manipulator interface using particle filter.

This paper utilizes a human-robot interface system which incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. However, the translation error grows quickly when the sensors fail to detect the hand motion, so a methodology to correct the translation error is required. Moreover, because of perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator. PMID:24757430

Du, Guanglong; Zhang, Ping; Wang, Xueqian

2014-01-01

51

In this paper we propose a robust lane detection and tracking method that combines particle filters with the particle swarm optimization method. The method mainly uses particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by particle swarm optimization. The particle filter can effectively perform lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since particle swarm optimization is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the foraging behaviour of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453

Cheng, Wen-Chang

2012-01-01

52

Some issues and results on the EnKF and particle filters for meteorological models

Presentation slides by C. Baehr and O. Pannekoucke (Chaos 2009) on issues and results for the EnKF and particle filters in meteorological models, covering the nonlinear filtering problem and particle filter resolution.

Baehr, Christophe

53

A hybrid method for optimization of the adaptive Goldstein filter

NASA Astrophysics Data System (ADS)

The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function. Depending on it, considered areas are strongly or weakly filtered. Several variants have been developed to adaptively determine alpha using different indicators such as the coherence, and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.

Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

2014-12-01
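For orientation, the baseline (fixed-alpha) Goldstein filter that this paper builds on can be sketched as below: take the FFT of a complex interferogram patch, weight the spectrum by its smoothed amplitude raised to the power alpha, and invert. The adaptive alpha estimation, bias correction, bootstrapping, and iteration that are the paper's contribution are not reproduced; alpha is simply a fixed input here.

```python
import numpy as np

rng = np.random.default_rng(4)

def smooth2d(a, k=3):
    """Separable k x k moving-average smoothing."""
    kern = np.ones(k) / k
    a = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, a)

def goldstein_patch(phase, alpha=0.8):
    """Baseline Goldstein filter on one interferogram patch (fixed alpha)."""
    Z = np.fft.fft2(np.exp(1j * phase))
    S = smooth2d(np.abs(Z))              # smoothed amplitude spectrum
    Zf = Z * (S / S.max()) ** alpha      # emphasize dominant fringe frequencies
    return np.angle(np.fft.ifft2(Zf))

# Synthetic fringe patch: linear phase ramp plus strong phase noise
n = 64
true_phase = 0.2 * np.arange(n)[None, :] * np.ones((n, 1))
noisy_phase = true_phase + rng.normal(0, 0.8, (n, n))
filtered_phase = goldstein_patch(noisy_phase)

circ_err = lambda p: np.std(np.angle(np.exp(1j * (p - true_phase))))
print(circ_err(noisy_phase), circ_err(filtered_phase))
```

Because alpha is fixed, this sketch exhibits exactly the behaviour the paper criticizes: low-noise areas are over-filtered and high-noise areas may be under-filtered, which is what the adaptive, bias-corrected alpha is designed to avoid.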

54

Behaviour based particle filtering for human articulated motion tracking

This paper presents an approach to human motion tracking using multiple pre-trained activity models for propagation of particles in Annealed Particle Filtering. Hidden Markov models are trained on dimensionally reduced joint angle data to produce models of activity. Particles are divided between models for propagation by HMM synthesis, before converging on a solution during the annealing process.

John Darby; Baihua Li; Nicholas Paul Costen

2008-01-01

55

Gravity gradient-terrain aided navigation based on particle filter

NASA Astrophysics Data System (ADS)

Based on the particle filter, a gravity gradient-terrain aided positioning technology is proposed in this paper. Owing to the sensitivity of the gravity gradient to terrain, a gravity gradient reference map can be computed from local terrain elevation data. The position is then determined by matching the real-time measured gravity gradient data to the prepared gravity gradient reference map. The most widely used approximate filtering method is the extended Kalman filter (EKF). The EKF is computationally simple, but the convergence of the state estimate of the position is not guaranteed. The particle filter (PF) works directly with the nonlinear state and measurement functions, so no linearization technology is needed. The PF can assure the convergence of the state estimate, which follows from classical results on the convergence of Bayesian estimators. Simulations show the particle filter to be a superior alternative to the EKF in gravity gradient-terrain matching navigation systems.

Xiong, Ling; Ma, Jie; Tian, Jin-Wen

2009-10-01
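The map-matching idea can be sketched in one dimension: particles represent candidate positions, and each measurement is compared against a lookup into the reference map to weight them. The synthetic map, motion model, and noise levels below are assumptions for illustration; the real system matches a measured gravity gradient against a 2-D reference map.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D gravity-gradient reference map over along-track position
xs = np.linspace(0, 100, 1001)
ref_map = np.sin(0.3 * xs) + 0.5 * np.sin(1.1 * xs)   # synthetic map values
lookup = lambda p: np.interp(p, xs, ref_map)

SIGMA = 0.1                           # measurement noise std
N = 3000
true_pos = 20.0
particles = rng.uniform(0, 100, N)    # no prior knowledge of position

for _ in range(60):
    true_pos += 1.0                                  # vehicle moves 1 unit/step
    z = lookup(true_pos) + rng.normal(0, SIGMA)      # measured gradient value
    particles += 1.0 + rng.normal(0, 0.2, N)         # dead reckoning with noise
    # Weight each particle by how well the map at its position explains z
    w = np.exp(-0.5 * ((z - lookup(particles)) / SIGMA) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]     # resample

estimate = particles.mean()
print(estimate, true_pos)
```

A single measurement is ambiguous (many map positions share the same value), which is why the sequential filter is needed: only trajectories whose whole signature matches the measurement sequence survive the repeated reweighting.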

56

Random set particle filter for bearings-only multitarget tracking

The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the state of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model where each

Matti Vihola

2005-01-01

57

Speaker Tracking Using an Audio-visual Particle Filter

We present an approach for tracking a lecturer during the course of his speech. We use features from multiple cameras and micro- phones, and process them in a joint particle filter framework. The filter performs sampled projections of 3D location hypotheses and scores them using features from both audio and video. On the video side, the features are based on

Kai Nickel; Tobias Gehrig; Hazim K. Ekenel; Rainer Stiefelhagen

58

Particle filtering with Lagrangian data in a point vortex model

Particle filtering is a technique used for state estimation from noisy measurements. In fluid dynamics, a popular problem called Lagrangian data assimilation (LaDA) uses Lagrangian measurements in the form of tracer positions ...

Mitra, Subhadeep

2012-01-01

59

Particle Methods for Filtering & Uncertainty Propagations P. Del Moral

P. Del Moral, INRIA team ALEA. Application domains include physics (sensors: pressure, temperature, diffusion coefficients, ...), finance (assets, portfolios, volatilities, defaults, ...) and statistics (real data).

Del Moral , Pierre

60

Fine-particle filter prevents damage to vacuum pumps

NASA Technical Reports Server (NTRS)

A filter system for mechanical pumps is designed with a baffle assembly that rotates in a circulating oil bath which traps destructive particles. This prevents severe damage to the pump and is serviceable for long periods before it requires cleaning.

Harlamert, P., Jr.

1964-01-01

61

Better Proposal Distributions: Object Tracking Using Unscented Particle Filter

Tracking objects involves the modeling of non-linear, non-Gaussian systems. On one hand, variants of the Kalman filter are limited by their Gaussian assumptions. On the other hand, the conventional particle filter, e.g., CONDENSATION, uses the transition prior as the proposal distribution. The transition prior does not take into account the current observation data, and many particles can therefore be wasted in low-likelihood areas.

Yong Rui; Yunqiang Chen

2001-01-01
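
The point about wasted particles can be made concrete in a scalar linear-Gaussian toy model (an illustrative stand-in, not the paper's unscented construction): with the transition prior as proposal, the weight is the observation likelihood and collapses when the observation is sharp, while the locally optimal proposal p(x_k | x_{k-1}, z_k) leaves weights that depend only on the previous state, so far fewer particles are wasted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: x_k = x_{k-1} + N(0, q),  z_k = x_k + N(0, r), with a sharp observation.
q, r = 1.0, 0.01
n = 5000
x_prev = rng.normal(0.0, 1.0, size=n)   # spread of previous-state particles
z = 0.3                                  # current observation

def ess(w):
    """Effective sample size of a set of unnormalized importance weights."""
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# (a) Transition prior as proposal: weight = p(z | x_k), sharply peaked in x_k.
x_prior = x_prev + rng.normal(0.0, np.sqrt(q), size=n)
w_prior = np.exp(-0.5 * (z - x_prior) ** 2 / r)

# (b) Locally optimal proposal p(x_k | x_{k-1}, z): for this model a Gaussian
# with variance s and mean m; the weight p(z | x_{k-1}) = N(z; x_{k-1}, q + r)
# does not depend on the new draw at all.
s = 1.0 / (1.0 / q + 1.0 / r)
m = s * (x_prev / q + z / r)
x_opt = m + rng.normal(0.0, np.sqrt(s), size=n)
w_opt = np.exp(-0.5 * (z - x_prev) ** 2 / (q + r))

ess_prior, ess_opt = ess(w_prior), ess(w_opt)
```

The effective sample size under the optimal proposal is several times that of the transition-prior proposal in this configuration, which is the motivation for observation-informed proposals such as the unscented particle filter.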

62

This work presents a comparison between two localization algorithms, an extended Kalman filter and a particle filter, using range data obtained from a radar-based system. With four transponders and one mobile base station, the system provides in real time the ranges to the transponders three times per second, like a frequency-modulated continuous-wave radar operating in the 5.8 GHz ISM band. Extended Kalman

Haytham Qasem; Leonhard Reindl

63

Optimization of filtering schemes for broadband astro-combs

Fragment of the title page: Guoqing Chang, Chih-Hao Li, David [name truncated], et al., Harvard University, Cambridge, Massachusetts 02138, USA (*guoqing@mit.edu). The abstract begins "To realize a broadband ..."; the scheme's components include "... line spacing; (2) power amplifiers to boost the power of pulses from the filtered comb; and (3) highly ..."

Walsworth, Ronald L.

64

Optimally Robust Kalman Filtering at Work: AO-, IO-, and Simultaneously IO-and AO-Robust Filters

We take up optimality results for robust Kalman filtering from Ruckdeschel (2001, 2010), where ... (2006), Fried et al. (2007). Keywords: robustness, Kalman filter, innovation outlier, additive outlier.

Ruckdeschel, Peter

65

Genetic particle filter application to land surface temperature downscaling

NASA Astrophysics Data System (ADS)

Thermal infrared data are widely used for surface flux estimation, making it possible to assess water and energy budgets through land surface temperature (LST). Many applications require both high spatial resolution (HSR) and high temporal resolution (HTR), which are not presently available from space. It is therefore necessary to develop methodologies that use the coarse spatial/high temporal resolution LST remote-sensing products for better monitoring of fluxes at appropriate scales. For that purpose, a data assimilation method was developed to downscale LST based on particle filtering. The basic tenet of our approach is to constrain LST dynamics simulated at both HSR and HTR through the optimization of aggregated temperatures at the coarse observation scale. Thus, a genetic particle filter (GPF) data assimilation scheme was implemented and applied to a land surface model which simulates prior subpixel temperatures. First, the GPF downscaling scheme was tested on pseudo-observations generated in the framework of the study area landscape (Crau-Camargue, France) and climate for the year 2006. The GPF performances were evaluated against observation errors and temporal sampling. Results show that the GPF outperforms prior model estimations. Finally, the GPF method was applied to Spinning Enhanced Visible and InfraRed Imager time series and evaluated against HSR data provided by an Advanced Spaceborne Thermal Emission and Reflection Radiometer image acquired on 26 July 2006. The temperatures of seven land cover classes present in the study area were estimated with root-mean-square errors less than 2.4 K, which is a very promising result for downscaling LST satellite products.

Mechri, Rihab; Ottlé, Catherine; Pannekoucke, Olivier; Kallel, Abdelaziz

2014-03-01

66

NASA Astrophysics Data System (ADS)

This research aims to develop a methodological framework, based on a data-driven approach known as particle filtering that is often found in computer vision methods, to correct the effect of respiratory motion on Nuclear Medicine imaging data. Particle filters are a popular class of numerical methods for solving optimal estimation problems, and we wish to use their flexibility to build an adaptive framework. In this work we use the particle filter for estimating the deformation of the internal organs of the human torso, represented by X, over a discrete time index k. The particle filter approximates the distribution of the deformation of internal organs by generating many propositions, called particles. The posterior estimate is inferred from an observation Zk of the external torso surface. We demonstrate two preliminary approaches to tracking organ deformation. In the first approach, Xk represents a small set of organ surface points. In the second approach, Xk represents a set of affine organ registration parameters relative to a reference time index r. Both approaches are contrasted with a comparable technique using direct mapping to infer Xk from the observation Zk. Simulations of both approaches using the XCAT phantom suggest that the particle filter-based approaches, on average, perform better.

Abd. Rahni, Ashrani Aizzuddin; Lewis, Emma; Wells, Kevin; Guy, Matthew; Goswami, Budhaditya

2010-03-01

67

Particle filter for long range radar in RUV

NASA Astrophysics Data System (ADS)

In this paper we present an approach for tracking with a high-bandwidth active radar in long range scenarios with 3-D measurements in r-u-v coordinates. The 3-D low-process-noise scenarios considered are much more difficult than the ones we have previously investigated where measurements were in 2-D (i.e., polar coordinates). We show that in these 3-D scenarios the extended Kalman filter and its variants are not desirable as they suffer from either major consistency problems or degraded range accuracy, and most flavors of particle filter suffer from a loss of diversity among particles after resampling. This leads to sample impoverishment and divergence of the filter. In the scenarios studied, this loss of diversity can be attributed to the very low process noise. However, a regularized particle filter is shown to avoid this diversity problem while producing consistent results. The regularization is accomplished using a modified version of the Epanechnikov kernel.

Romeo, Kevin; Willett, Peter; Bar-Shalom, Yaakov

2014-06-01
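
The regularization step described above can be sketched for a scalar state (the paper works in 3-D r-u-v coordinates): after resampling, each duplicated particle is jittered by a draw from a scaled Epanechnikov kernel, restoring diversity under low process noise. A draw from the Epanechnikov density (3/4)(1 - u^2) on [-1, 1] can be generated as the median of three independent uniforms; the bandwidth below is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(2)

def epanechnikov(size):
    """Samples from the Epanechnikov kernel (3/4)(1 - u^2) on [-1, 1],
    using the median-of-three-uniforms identity."""
    return np.median(rng.uniform(-1.0, 1.0, size=(3, size)), axis=0)

def regularize(particles, bandwidth):
    """Post-resampling jitter: breaks up identical copies created by resampling."""
    return particles + bandwidth * epanechnikov(len(particles))

# After resampling, many particles are exact duplicates of a few survivors ...
resampled = np.repeat(np.array([0.0, 1.0, 4.0]), 5000)
jittered = regularize(resampled, bandwidth=0.1)

n_unique_before = len(np.unique(resampled))
n_unique_after = len(np.unique(jittered))

# Sanity-check the kernel sampler: mean 0, variance 1/5 for Epanechnikov.
samples = epanechnikov(200000)
```

Without this jitter, repeated resampling under near-zero process noise drives all particles onto a handful of points, which is the sample impoverishment the abstract describes.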

68

COMPUTATIONS ON THE PERFORMANCE OF PARTICLE FILTERS AND ELECTRONIC AIR CLEANERS

The paper discusses computations on the performance of particle filters and electronic air cleaners (EACs). The collection efficiency of particle filters and EACs is calculable if certain factors can be assumed or calibrated. For fibrous particulate filters, measurement of colle...


70

Optimization of tunable silicon compatible microring filters

Microring resonators can be used as pass-band filters for wavelength division demultiplexing in electronic-photonic integrated circuits for applications such as analog-to-digital converters (ADCs). For high quality signal ...

Amatya, Reja

2008-01-01

71

Development of Golden Section Search Driven Particle Swarm Optimization and its Application

The particle swarm optimization (PSO) algorithm, although widely used in various fields, has a step-size problem which deteriorates optimization performance. This problem is resolved using the golden section search (GSS) and the steepest descent method. We also design a filter that improves the optimization performance of the proposed algorithm. The effectiveness of the proposed algorithm, including for which

S. Oh; Y. Hori

2006-01-01

72

Geomagnetic modeling by optimal recursive filtering

NASA Technical Reports Server (NTRS)

The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparison with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.

Gibbs, B. P.; Estes, R. H.

1981-01-01

73

Effects of particle size and velocity on burial depth of airborne particles in glass fiber filters

Air sampling for particulate radioactive material involves collecting airborne particles on a filter and then determining the amount of radioactivity collected per unit volume of air drawn through the filter. The amount of radioactivity collected is frequently determined by directly measuring the radiation emitted from the particles collected on the filter. Counting losses caused by the particle becoming buried in the filter matrix may cause concentrations of airborne particulate radioactive materials to be underestimated by as much as 50%. Furthermore, the dose calculation for inhaled radionuclides will also be affected. The present study was designed to evaluate the extent to which particle size and sampling velocity influence burial depth in glass-fiber filters. Aerosols of high-fired ²³⁹PuO₂ were collected at various sampling velocities on glass-fiber filters. The fraction of alpha counts lost due to burial was determined as the ratio of activity detected by direct alpha count to the quantity determined by photon spectrometry. The results show that burial of airborne particles collected on glass-fiber filters appears to be a weak function of sampling velocity and particle size. Counting losses ranged from 0 to 25%. A correction that assumes losses of 10 to 15% would ensure that the concentration of airborne alpha-emitting radionuclides would not be underestimated when glass-fiber filters are used. 32 references, 21 figures, 11 tables.

Higby, D.P.

1984-11-01

74

A new optimizer using particle swarm theory

The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed

Russell Eberhart; James Kennedy

1995-01-01
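
The particle swarm paradigm described by Eberhart and Kennedy can be sketched in a few lines; the inertia/cognitive/social coefficients below are common textbook values, not the 1995 paper's exact settings, and the sphere objective is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm minimization of f over [-5, 5]^dim."""
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity blends inertia, pull toward each particle's personal best,
        # and pull toward the swarm's global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=2)
```

The two random coefficient vectors r1 and r2 are what distinguish the "locally oriented" and global paradigms' stochastic pull; here only the global-best variant is shown.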

75

Particle swarm optimization for unsupervised robotic learning

We explore using particle swarm optimization on problems with noisy performance evaluation, focusing on unsupervised robotic learning. We adapt a technique of overcoming noise used in genetic algorithms for use with particle swarm optimization, and evaluate the performance of both the original algorithm and the noise-resistant method for several numerical problems with added noise, as well as unsupervised learning of

Jim Pugh; Alcherio Martinoli; Yizhen Zhang

2005-01-01

76

Sequential Bearings-Only-Tracking Initiation with Particle Filtering Method

The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also provided for performance evaluation. PMID:24453865

Hao, Chengpeng

2013-01-01

77

Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.

Zhou, Ning; Huang, Zhenyu; Welch, Greg; Zhang, J.

2012-05-24

78

Design of optimal correlation filters for hybrid vision systems

NASA Technical Reports Server (NTRS)

Research is underway at the NASA Johnson Space Center on the development of vision systems that recognize objects and estimate their position by processing their images. This is a crucial task in many space applications such as autonomous landing on Mars sites, satellite inspection and repair, and docking of the space shuttle and space station. Currently available algorithms and hardware are too slow to be suitable for these tasks. Electronic digital hardware exhibits superior performance in computing and control; however, it takes too much time to carry out important signal processing operations such as Fourier transformation of image data and calculation of the correlation between two images. Fortunately, because of their inherent parallelism, optical devices can carry out these operations very fast, although they are not quite suitable for computation and control type operations. Hence, investigations are currently being conducted on the development of hybrid vision systems that utilize both optical techniques and digital processing jointly to carry out object recognition tasks in real time. Algorithms for the design of optimal filters for use in hybrid vision systems were developed. Specifically, an algorithm was developed for the design of real-valued frequency plane correlation filters. Furthermore, research was also conducted on designing correlation filters optimal in the sense of providing maximum signal-to-noise ratio when noise is present in the detectors in the correlation plane. Algorithms were developed for the design of different types of optimal filters: complex filters, real-valued filters, phase-only filters, ternary-valued filters, and coupled filters. This report presents some of these algorithms in detail along with their derivations.

Rajan, Periasamy K.

1990-01-01

79

Optimal Filtering Methods to Structural Damage Estimation under Ground Excitation

This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869

Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan

2013-01-01

80

Robustness of optimal binary filters: analysis and design

Fragments of the thesis figure list: increase of error for optimal filters when Xs,k = 1, 0 is observed and p = 1/n; realizations of triplex (a), sans-serif (b), and gothic (c) fonts; sparse-noise-degraded realizations of triplex; results of filters designed for the triplex and gothic fonts for noise intensities p' = 0.03, 0.06, and 0.01.

Grigoryan, Artyom M

1999-01-01

81

Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

NASA Astrophysics Data System (ADS)

We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

Bruno, Marcelo G. S.; Dias, Stiven S.

2014-12-01

82

Nonlinear Statistical Signal Processing: A Particle Filtering Approach

An introduction to particle filtering is presented, starting with an overview of Bayesian inference from batch to sequential processors. Once the evolving Bayesian paradigm is established, simulation-based methods using sampling theory and Monte Carlo realizations are discussed. Here the usual limitations of nonlinear approximations and non-Gaussian processes prevalent in classical nonlinear processing algorithms (e.g., Kalman filters) are no longer a restriction to performing Bayesian inference. It is shown how the underlying hidden or state variables are easily assimilated into this Bayesian construct. Importance sampling methods are then discussed, and it is shown how they can be extended to sequential solutions implemented using Markovian state-space models as a natural evolution. With this in mind, the idea of a particle filter, which is a discrete representation of a probability distribution, is developed, and it is shown how it can be implemented using sequential importance sampling/resampling methods. Finally, an application is briefly discussed comparing the performance of particle filter designs with classical nonlinear filter implementations.

Candy, J

2007-09-19
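
The resampling half of the sequential importance sampling/resampling construction described above can be illustrated with the common systematic scheme (one of several standard choices; the source does not prescribe a particular one): a single uniform offset places N equally spaced pointers into the cumulative weight distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

def systematic_resample(weights):
    """Return indices drawn so each particle is replicated roughly
    in proportion to its normalized importance weight."""
    n = len(weights)
    cumulative = np.cumsum(weights / np.sum(weights))
    cumulative[-1] = 1.0  # guard against floating-point shortfall
    pointers = (rng.random() + np.arange(n)) / n
    return np.searchsorted(cumulative, pointers)

# A particle with half the total weight is duplicated; zero-weight particles die.
idx = systematic_resample(np.array([0.5, 0.25, 0.25, 0.0]))
counts = np.bincount(idx, minlength=4)
```

With these weights the replication counts are deterministic regardless of the random offset, because each weight is an exact multiple of 1/N; in general systematic resampling has lower variance than drawing N independent multinomial samples.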

83

Microscopical examination of particles on smoked cigarette filters

Cigarette butts collected from crime scenes can play an important role in forensic investigations by providing a DNA link\\u000a to a victim or suspect. Microscopic particles can frequently be seen on smoked cigarette filters with stereomicroscopy. The\\u000a authors are not aware of previous published attempts to identify this material. These particles were examined with transmission\\u000a and scanning electron microscopy and

Charles A. Linch; Joseph A. Prahlow

2008-01-01

84

Tracking With a Hierarchical Partitioned Particle Filter and Movement Modelling

We present an approach to track human subjects using an articulated human framework. First, we describe the articulated hierarchical human model. Second, we develop a stochastic hierarchical, partitioned particle filter based on the natural structure and limb dependency of the human body. We apply this to track human subjects in video sequences using likelihoods adapted to the

Zsolt L. Husz; Andrew M. Wallace; Patrick R. Green

2011-01-01

85

Tracking Deforming Objects Using Particle Filtering for Geometric Active Contours

proposed, perhaps most prominently the B-spline representation used for a "snake model" as in [2]. Isard and Blake (see [1] and the references therein) use the B-spline representation for contours of objects. To our knowledge, this is the first attempt to implement an approximate particle filtering algorithm for tracking

Vaswani, Namrata

86

Multi-robot Simultaneous Localization and Mapping using Particle Filters

This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM). (Andrew Howard, NASA Jet Propulsion Laboratory, Pasadena, California 91109, U.S.A.; Email: abhoward@robotics.jpl.nasa.gov.)

Howard, Andrew

87

Fast, parallel implementation of particle filtering on the GPU architecture

NASA Astrophysics Data System (ADS)

In this paper, we introduce a modified cellular particle filter (CPF) which we mapped on a graphics processing unit (GPU) architecture. We developed this filter adaptation using a state-of-the-art CPF technique. Mapping this filter realization on a highly parallel architecture entailed a shift in the logical representation of the particles. In this process, the original two-dimensional organization is reordered as a one-dimensional ring topology. We proposed a proof-of-concept measurement on two models with an NVIDIA Fermi architecture GPU. This design achieved a 411-μs kernel time per state and a 77-ms global running time for all states for 16,384 particles with a 256 neighbourhood size on a sequence of 24 states for a bearings-only tracking model. For a commonly used benchmark model at the same configuration, we achieved a 266-μs kernel time per state and a 124-ms global running time for all 100 states. Kernel time includes random number generation on the GPU with curand. These results attest to the effective and fast use of the particle filter in high-dimensional, real-time applications.

Gelencsér-Horváth, Anna; Tornai, Gábor János; Horváth, András; Cserey, György

2013-12-01

88

Business Cycle and Stock Market Volatility: A Particle Filter Approach

Roberto Casarin and Carmine ... study changes in the volatilities of the business cycle and stock market valuations by estimating a Markov switching stochastic ... in both business cycle and stock market variables along similar patterns. Keywords: Markov switching.

Del Moral , Pierre

89

Fusing Depth and Video using Rao-Blackwellized Particle Filter

Amit Agrawal and Rama Chellappa address the problem of fusing sparse and noisy depth data obtained from a range finder with features obtained from intensity ..., formulated as Structure from Motion (SfM) starting with a flat depth map; instead of using 3D depths, they formulate ...

Agrawal, Amit

90

ESTIMATION AND CONTROL OF INDUSTRIAL PROCESSES WITH PARTICLE FILTERS

Rubén Morales ... [on estimation and control] of industrial processes. In particular, we adopt a jump Markov linear Gaussian (JMLG) model to describe an industrial heat exchanger. The parameters of this model are identified with the expectation maximisation

de Freitas, Nando

91

Blended Particle Filters for Large Dimensional Chaotic Dynamical Systems

A challenge in contemporary data science is the development of statistically accurate particle filters which capture non-Gaussian features in an adaptively evolving low-dimensional subspace ... Many contemporary problems in science, ranging from protein folding in molecular dynamics ...

Majda, Andrew J.

92

Probabilistic White Matter Fiber Tracking using Particle Filtering and von Mises-Fisher Sampling

Standard particle filtering techniques have previously been applied to the problem of fiber tracking by Brun et al. (2002) and Bjornemo et al. (2002). However, these previous attempts did not utilise the full power of the technique, and as a result the fiber paths were tracked in a goal-directed way. In this paper we provide an advanced technique, presenting a fast and novel probabilistic method for white matter fiber tracking in diffusion-weighted MRI (DWI) which takes advantage of the weighting and resampling mechanism of particle filtering. We formulate fiber tracking using a nonlinear state space model which captures both the smoothness regularity of the fibers and the uncertainties in the local fiber orientations due to noise and partial volume effects. Global fiber tracking is then posed as a problem of particle filtering. To model the posterior distribution, we classify voxels of the white matter as either prolate or oblate tensors. We then construct the orientation distributions for prolate and oblate tensors separately. Finally, the importance density function for particle filtering is modeled using the von Mises-Fisher distribution on a unit sphere. Fast and efficient sampling is achieved using the Ulrich-Woods simulation algorithm. Given a seed point, the method is able to rapidly locate the globally optimal fiber and also provides a probability map for potential connections. The proposed method is validated and compared to alternative methods both on synthetic data and real-world brain MRI datasets. PMID:18602332

Zhang, Fan; Hancock, Edwin R.; Goodlett, Casey; Gerig, Guido

2009-01-01
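
The von Mises-Fisher draw at the heart of such a proposal can be sketched for the unit sphere in R^3, where the polar component has a closed-form inverse CDF; this is a simplified stand-in for the Ulrich-Woods sampler cited in the abstract, and the mean direction and concentration below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_vmf_s2(mu, kappa, n):
    """Draw n samples from vMF(mu, kappa) on the unit sphere in R^3."""
    mu = mu / np.linalg.norm(mu)
    # Polar component w = cos(angle to mu); on S^2 the density exp(kappa*w)
    # admits inverse-CDF sampling in closed form.
    u = rng.random(n)
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # Uniform azimuth around the mean direction.
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(np.clip(1.0 - w ** 2, 0.0, None))
    samples_z = np.column_stack([s * np.cos(phi), s * np.sin(phi), w])
    # Rotate the z-axis onto mu (Rodrigues' rotation formula).
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(z, mu)
    if np.linalg.norm(axis) < 1e-12:          # mu is +/- z: no rotation needed
        return samples_z if mu[2] > 0 else -samples_z
    axis /= np.linalg.norm(axis)
    angle = np.arccos(np.clip(mu @ z, -1.0, 1.0))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return samples_z @ R.T

mu = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
pts = sample_vmf_s2(mu, kappa=50.0, n=5000)
mean_dir = pts.mean(axis=0)
mean_dir = mean_dir / np.linalg.norm(mean_dir)
alignment = float(mean_dir @ mu)
```

In a fiber-tracking proposal, mu would be the locally estimated fiber orientation and kappa would encode confidence in it; large kappa concentrates the particles tightly around the fiber direction.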

93

Optimal Correlation Filters for Images with Signal-Dependent Noise

NASA Technical Reports Server (NTRS)

We address the design of optimal correlation filters for pattern detection and recognition in the presence of signal-dependent image noise sources. The particular examples considered are film-grain noise and speckle. Two basic approaches are investigated: (1) deriving the optimal matched filters for the signal-dependent noise models and comparing their performances with those derived for traditional signal-independent noise models and (2) first nonlinearly transforming the signal-dependent noise to signal-independent noise followed by the use of a classical filter matched to the transformed signal. We present both theoretical and computer simulation results that demonstrate the generally superior performance of the second approach in terms of the correlation peak signal-to-noise ratio.

Downie, John D.; Walkup, John F.

1994-01-01

94

Random set particle filter for bearings-only multitarget tracking

NASA Astrophysics Data System (ADS)

The random set approach to multitarget tracking is a theoretically sound framework that covers joint estimation of the number of targets and the state of the targets. This paper describes a particle filter implementation of the random set multitarget filter. The contribution of this paper to the random set tracking framework is the formulation of a measurement model where each sensor report is assumed to contain at most one measurement. The implemented filter was tested in synthetic bearings-only tracking scenarios containing up to two targets in the presence of false alarms and missed measurements. The estimated target state consisted of 2D position and velocity components. The filter was capable of tracking the targets fairly well despite the missing measurements and the relatively high false alarm rates. In addition, the filter showed robustness against wrong parameter values for the false alarm rates. The results obtained during the limited tests of the filter show that the random set framework has potential for challenging tracking situations. On the other hand, the computational burden of the described implementation is quite high and increases approximately linearly with the expected number of targets.

Vihola, Matti

2005-05-01

95

Na-Faraday rotation filtering: the optimal point.

Narrow-band optical filtering is required in many spectroscopy applications to suppress unwanted background light. One example is quantum communication, where the fidelity is often limited by the performance of the optical filters. This limitation can be circumvented by utilizing the GHz-wide features of a Doppler-broadened atomic gas. The anomalous dispersion of atomic vapours enables spectral filtering. These so-called Faraday anomalous dispersion optical filters (FADOFs) can be far better than any commercial filter in terms of bandwidth, transition edge and peak transmission. We present a theoretical and experimental study on the transmission properties of a sodium vapour based FADOF with the aim of finding the best combination of optical rotation and intrinsic loss. The relevant parameters, such as magnetic field, temperature, the related optical depth, and polarization state, are discussed. The non-trivial interplay of these quantities defines the net performance of the filter. We determine analytically the optimal working conditions, such as transmission and the signal-to-background ratio, and validate the results experimentally. We find a single global optimum for one specific optical path length of the filter. This can now be applied to spectroscopy, guide star applications, or sensing. PMID:25298251

Kiefer, Wilhelm; Löw, Robert; Wrachtrup, Jörg; Gerhardt, Ilja

2014-01-01


97

Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement

NASA Astrophysics Data System (ADS)

In this modern era, image transmission and processing play a major role; retrieving information from satellite and medical images would be impossible without image processing techniques. Edge enhancement is an image processing step that increases the edge contrast of an image or video in an attempt to improve its acutance. Edges represent discontinuities of the image intensity function, and processing these discontinuities well requires a good edge enhancement technique. The proposed work presents a new approach to edge enhancement using hybridized smoothing filters: swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO)) search for an optimal sequence of filters from among a set of simple, representative image processing filters. This paper analyses these swarm intelligence techniques by comparing the hybrid filters they generate for image edge enhancement.

Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.

98

OPDIC (Optimized Peak, Distortion and Clutter) Detection Filter.

NASA Astrophysics Data System (ADS)

Detection is considered: determining regions of interest (ROIs), i.e., the locations of multiple object classes in a cluttered scene when object distortions and contrast differences are present. High probability of detection P_{D} is essential and low P_{FA} is desirable, since subsequent stages in the full system can only decrease P_{FA} and cannot increase P_{D}. Low-resolution blob objects and objects with more internal detail are considered, with both 3-D aspect-view and depression-angle distortions present. Extensive tests were conducted on 56 scenes with object classes not present in the training set. A modified MINACE (Minimum Noise and Correlation Energy) distortion-invariant filter was used. This minimizes correlation plane energy due to distortions and clutter while satisfying correlation peak constraint values for various object aspect views. The filter was modified with a new object model (to give predictable output peak values) and a new correlated-noise clutter model; a white Gaussian noise model of distortion was used; and new techniques were developed to increase the number of training set images (N_{T}) included in the filter. Excellent results were obtained. However, the correlation plane distortion and clutter energy functions were found to become worse as N_{T} was increased, and no rigorous method exists to select the best N_{T} (when to stop filter synthesis). A new OPDIC (Optimized Peak, Distortion, and Clutter) filter was thus devised. This filter retained the new object, clutter, and distortion models noted above. It minimizes the variance of the correlation peak values for all training set images (not just the N_{T} images). As N_{T} increases, the peak variance and the objective functions (correlation plane distortion and clutter energy) are all minimized. Thus, this new filter optimizes the desired functions and provides an easy way to stop filter synthesis (when the objective function is minimized).
Tests show excellent detection results and confirm its advantageous properties.

House, Gregory Philip

1995-01-01

99

A Kalman-Particle Kernel Filter and its Application to Terrain Navigation

A Kalman-particle kernel filter, combining Kalman filtering with a kernel density estimator in the spirit of the regularized particle filter, is proposed; it mitigates the undesirable Monte Carlo fluctuations of conventional particle filtering. The new filter is applied to terrain navigation. Keywords: Kalman filter, kernel density estimator, regularized particle filter, inertial navigation system.

Del Moral , Pierre

100

Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences

This paper investigates the philosophical and performance differences of particle swarm and evolutionary optimization. The methods of processing employed in each technique are first reviewed, followed by a summary of their philosophical differences. Comparison experiments involving four non-linear functions well studied in the evolutionary optimization literature are used to highlight some performance differences between the techniques.

Peter J. Angeline

1998-01-01

101

Distributed Soft-Data-Constrained Multi-Model Particle Filter.

A distributed nonlinear estimation method based on soft-data-constrained multimodel particle filtering and applicable to a number of distributed state estimation problems is proposed. This method needs only local data exchange among neighboring sensor nodes and thus provides enhanced reliability, scalability, and ease of deployment. To make the multimodel particle filtering work in a distributed manner, a Gaussian approximation of the particle cloud obtained at each sensor node and a consensus propagation-based distributed data aggregation scheme are used to dynamically reweight the particles' weights. The proposed method can recover from failure situations and is robust to noise, since it keeps the same population of particles and uses the aggregated global Gaussian to infer constraints. The constraints are enforced by adjusting the particles' weights after each communication step, assigning higher mass to particles closer to the global estimate agreed upon by the nodes of the entire sensor network. Each sensor node experiences only gradual change; i.e., if noise occurs in the system, the node, its neighbors, and consequently the overall network are less affected than with other approaches and thus recover faster. The efficiency of the proposed method is verified through extensive simulations for a target tracking system which can process both soft and hard data in sensor networks. PMID:24956539

Seifzadeh, Sepideh; Khaleghi, Bahador; Karray, Fakhri

2014-06-16
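The reweighting idea above, boosting particles that lie near the network-wide Gaussian estimate, can be sketched roughly as follows. This is a generic illustration, not the authors' code; the function name and interface are invented here.

```python
import numpy as np

def reweight_toward_global(particles, weights, g_mean, g_cov):
    """Adjust particle weights so that particles closer to the aggregated
    global Gaussian estimate (g_mean, g_cov) receive higher mass."""
    inv = np.linalg.inv(g_cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** len(g_mean) * np.linalg.det(g_cov))
    d = particles - g_mean
    # Gaussian density of each particle under the global estimate
    dens = norm * np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, inv, d))
    w = weights * dens
    return w / w.sum()  # renormalize
```

A particle sitting at the global mean gains weight relative to one far away, which is the constraint-enforcement step the abstract describes.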

102

Fuzzy membership function optimization for system identification using an extended Kalman filter

This paper uses an extended Kalman filter to optimize fuzzy membership functions for system modeling, or system identification. A benefit of the proposed approach is that the system acts as a noise-reducing filter. We demonstrate that the extended Kalman filter can optimize the membership functions effectively.

Simon, Dan

103

Improved particle swarm optimization combined with chaos

As a novel optimization technique, chaos has gained much attention and some applications during the past decade. For a given energy or cost function, by following chaotic ergodic orbits, a chaotic dynamic system may eventually reach the global optimum or its good approximation with high probability. To enhance the performance of particle swarm optimization (PSO), which is an evolutionary computation

Bo Liu; Ling Wang; Yi-Hui Jin; Fang Tang; De-Xian Huang

2005-01-01
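One common way to combine chaos with PSO, hedged here since the abstract is truncated, is a logistic-map chaotic local search around the global best. The sketch below is an illustrative variant, not the paper's exact scheme; all parameters (inertia 0.7, acceleration 1.5, jitter scale 0.1) are choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def chaotic_pso(f, dim=2, n=20, iters=200, bounds=(-5.0, 5.0)):
    """Minimal PSO with a logistic-map chaotic local search around the global best."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    z = 0.7  # chaotic variable, iterated by the logistic map z <- 4z(1-z)
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
        # chaotic local search: jitter the global best along the chaotic orbit
        z = 4.0 * z * (1.0 - z)
        cand = np.clip(g + 0.1 * (2.0 * z - 1.0), lo, hi)
        if f(cand) < f(g):
            g = cand
    return g, f(g)

best, val = chaotic_pso(lambda p: float(np.sum(p ** 2)))  # 2-D sphere function
```

The ergodic chaotic orbit lets the search escape shallow local structure around the incumbent best, which is the motivation the abstract gives for adding chaos to PSO.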

104

NASA Astrophysics Data System (ADS)

In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using a batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS of the PF batch filter. For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results from the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, 3D error by external orbit comparison, and POD repeatability are analyzed for orbit quality assessment. The POD results are externally checked against NASA JPL's orbits, produced with entirely different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at the level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at the level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.

Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul

2014-09-01

105

GaN nanostructure design for optimal dislocation filtering

NASA Astrophysics Data System (ADS)

The effect of image forces in GaN pyramidal nanorod structures is investigated to develop dislocation-free light emitting diodes (LEDs). A model based on the eigenstrain method and nonlocal stress is developed to demonstrate that the pyramidal nanorod efficiently ejects dislocations out of the structure. Two possible regimes of filtering behavior are found: (1) cap-dominated and (2) base-dominated. The cap-dominated regime is shown to be the more effective filtering mechanism. Optimal ranges of fabrication parameters that favor a dislocation-free LED are predicted and corroborated by resorting to available experimental evidence. The filtering probability is summarized as a function of practical processing parameters: the nanorod radius and height. The results suggest an optimal nanorod geometry with a radius of 50b (26 nm) and a height of 125b (65 nm), in which b is the magnitude of the Burgers vector for the GaN system studied. A filtering probability of greater than 95% is predicted for the optimal geometry.

Liang, Zhiwen; Colby, Robert; Wildeson, Isaac H.; Ewoldt, David A.; Sands, Timothy D.; Stach, Eric A.; García, R. Edwin

2010-10-01

106

FIR filter optimization for video processing on FPGAs

NASA Astrophysics Data System (ADS)

Two-dimensional finite impulse response (FIR) filters are an important component in many image and video processing systems. The processing of complex video applications in real time requires high computational power, which can be provided using field programmable gate arrays (FPGAs) due to their inherent parallelism. The most resource-intensive components in computing FIR filters are the multiplications of the folding operation. This work proposes two optimization techniques for high-speed implementations of the required multiplications with the least possible number of FPGA components. Both methods use integer linear programming formulations which can be optimally solved by standard solvers. In the first method, a formulation for the pipelined multiple constant multiplication problem is presented. In the second method, also multiplication structures based on look-up tables are taken into account. Due to the low coefficient word size in video processing filters of typically 8 to 12 bits, an optimal solution is found for most of the filters in the benchmark used. A complexity reduction of 8.5% for a Xilinx Virtex 6 FPGA could be achieved compared to state-of-the-art heuristics.

Kumm, Martin; Fanghänel, Diana; Müller, Konrad; Zipf, Peter; Meyer-Baese, Uwe

2013-12-01
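The costly multiplications the abstract refers to are multiplications by fixed filter coefficients, which FPGAs implement with shifts and additions; the paper's ILP then minimizes and shares the adders across coefficients. A bare-bones illustration of the underlying idea, replacing one constant multiply by shift-adds (not the paper's optimization itself):

```python
def shift_add_multiply(x: int, coeff: int) -> int:
    """Multiply x by a constant using only shifts and adds: one add per
    set bit of the coefficient, the basic building block of
    multiplierless (multiple constant multiplication) FIR filters."""
    acc, shift = 0, 0
    c = coeff
    while c:
        if c & 1:
            acc += x << shift  # add the shifted operand for this set bit
        c >>= 1
        shift += 1
    return acc
```

With 8- to 12-bit coefficients there are few set bits per constant, which is why the ILP can often find a provably optimal shift-add network.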

107

Using selection to improve particle swarm optimization

This paper describes an evolutionary optimization algorithm that is a hybrid based on the particle swarm algorithm but with the addition of a standard selection mechanism from evolutionary computation. A comparison between the hybrid swarm and the ordinary particle swarm shows that selection provides an advantage for some (but not all) complex functions.

Peter J. Angeline

1998-01-01

108

Measurement of particle sulfate from micro-aethalometer filters

NASA Astrophysics Data System (ADS)

The micro-aethalometer (AE51) was designed for high-time-resolution black carbon (BC) measurements; in the process, it collects particles on a filter inside the instrument. Here we examine the potential for saving these filters for subsequent sulfate (SO42-) measurement. For this purpose, a series of lab and field blanks were analyzed to characterize blank levels and variability, and collocated 24-h aerosol sampling was then conducted in Beijing with the AE51 and a dual-channel filterpack sampler that collects fine particles (PM2.5). The AE51 filters and the filters from the filterpacks sampled for 24 h were extracted with ultrapure water and analyzed by ion chromatography (IC) to determine the integrated SO42- concentration. Blank corrections were essential, and the detection limit for 24-h AE51 sampling of SO42- was estimated to be 1.4 µg/m3. The SO42- measured from the AE51, blank-corrected using batch-average field-blank SO42- values, was in reasonable agreement with the filterpack results (R2 > 0.87, slope = 1.02), indicating that it is possible to determine both BC and SO42- concentrations using the AE51 in Beijing. This result suggests that future comparison of the relative health impacts of BC and SO42- could be possible when the AE51 is used for personal exposure measurement.

Wang, Qingqing; Yang, Fumo; Wei, Lianfang; Zheng, Guangjie; Fan, Zhongjie; Rajagopalan, Sanjay; Brook, Robert D.; Duan, Fengkui; He, Kebin; Sun, Yele; Brook, Jeffrey R.

2014-10-01

109

Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the dual adaptive filtering by optimal projection (DAFOP) method to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step applies the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources; the two frequency windows are defined by optimizing convolutional filters. The second step uses a regression method to reconstruct the signal independently within various frequency windows. The method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts from 20 clinical recordings of awake and sleeping adults, with pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filtering. The DAFOP method is fast and automatic and can easily be used in clinical EEG recordings. PMID:25298967

Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

2014-01-01

110

Marginalized Particle Filter for Blind Signal Detection with Analog Imperfections

NASA Astrophysics Data System (ADS)

Recently, the marginalized particle filter (MPF) has been applied to blind symbol detection problems over selective fading channels. The MPF can ease the computational burden of the standard particle filter (PF) while offering better estimates compared with the standard PF. In this paper, we investigate the application of the blind MPF detector to more realistic situations where the systems suffer from analog imperfections which are non-linear signal distortion due to the inaccurate analog circuits in wireless devices. By reformulating the system model using the widely linear representation and employing the auxiliary variable resampling (AVR) technique for estimation of the imperfections, the blind MPF detector is successfully modified to cope with the analog imperfections. The effectiveness of the proposed MPF detector is demonstrated via computer simulations.

Yoshida, Yuki; Hayashi, Kazunori; Sakai, Hideaki; Bocquet, Wladimir

111

Accelerating Particle Filter using Randomized Multiscale and Fast Multipole Type Methods

The particle filter is a powerful method, but the conventional way of calculating the weights over the particles is computationally demanding. Gil Shabat, Yaniv Shmueli, Amit Bermanis, and Amir Averbuch present randomized multiscale and fast multipole type methods that accelerate the computation of particle filters.

Averbuch, Amir

112

Optimal Noise Filtering in the Chemotactic Response of Escherichia coli

Information-carrying signals in the real world are often obscured by noise. A challenge for any system is to filter the signal from the corrupting noise. This task is particularly acute for the signal transduction network that mediates bacterial chemotaxis, because the signals are subtle, the noise arising from stochastic fluctuations is substantial, and the system is effectively acting as a differentiator which amplifies noise. Here, we investigated the filtering properties of this biological system. Through simulation, we first show that the cutoff frequency has a dramatic effect on the chemotactic efficiency of the cell. Then, using a mathematical model to describe the signal, noise, and system, we formulated and solved an optimal filtering problem to determine the cutoff frequency that best separates the low-frequency signal from the high-frequency noise. There was good agreement between the theory, simulations, and published experimental data. Finally, we propose that an elegant implementation of the optimal filter in combination with a differentiator can be achieved via an integral control system. This paper furnishes a simple quantitative framework for interpreting many of the key notions about bacterial chemotaxis, and, more generally, it highlights the constraints on biological systems imposed by noise. PMID:17112312

Andrews, Burton W; Yi, Tau-Mu; Iglesias, Pablo A

2006-01-01
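The cutoff-frequency trade-off the abstract analyzes can be felt with even a first-order discrete low-pass filter, sketched below. This is a generic filter for intuition, not the authors' chemotaxis model.

```python
import math

def first_order_lowpass(samples, f_c, f_s):
    """Discrete first-order low-pass filter: f_c is the cutoff (Hz),
    f_s the sampling rate (Hz). Frequencies well below f_c pass through;
    frequencies well above f_c are attenuated."""
    w = 2 * math.pi * f_c / f_s          # normalized cutoff
    alpha = w / (w + 1.0)                # smoothing factor from RC analogy
    y, out = samples[0], []
    for s in samples:
        y += alpha * (s - y)             # exponential smoothing update
        out.append(y)
    return out
```

Raising f_c lets more of the fast noise through; lowering it delays the response to the real signal, which is exactly the trade-off the optimal filtering problem resolves.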

113

NASA Astrophysics Data System (ADS)

Ionosphere modeling is an important field of current study because of the ionosphere's influence on the propagation of electromagnetic signals. Among the various methods of obtaining ionospheric information, the Global Positioning System (GPS) is the most prominent because of its extensive network of stations distributed all over the world. Several studies in the literature model the ionosphere in terms of Total Electron Content (TEC); however, most investigate the ionosphere at global and regional scales, while the complex dynamics of the ionosphere require further study of the local structure of the TEC distribution. In this work, a particle filter is used to investigate the local character of the ionospheric Vertical Total Electron Content (VTEC). GPS data from 29 ground-based stations in Europe, belonging to the International GNSS Service (IGS) and the Reference Frame Sub-commission for Europe (EUREF), are used. The data were acquired on 18 February 2011 and are affected by the 15 February geomagnetic storm. In the preprocessing step, the observations of each satellite are examined for possible cycle slips, and the geometry-free linear combination of the observables is calculated for each continuous arc. Pseudorange observations are then smoothed with the carrier-to-code leveling method. The particle filter is used for near-real-time estimation of the VTEC and of the combined satellite and receiver biases. It is implemented by recursively generating a set of weighted samples of the state variables and has a flexible nature that can adapt to the characteristics of highly dynamic systems. In addition, the standard Kalman filter, an effective method for optimal state estimation, is applied to the same data sets to compare its results with those of the particle filter.
The comparison shows that the particle filter performs better than the standard Kalman filter, especially during the geomagnetic storm. Keywords: ionosphere, GPS, Kalman filter, particle filter

Onur Karslıoğlu, Mahmut; Aghakarimi, Armin

2013-04-01

114

NASA Astrophysics Data System (ADS)

A general sequential Monte Carlo method, particularly the general particle filter, has recently attracted much attention in prognostics because it can estimate on-line the posterior probability density functions of the states used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of the wavelet parameters is represented by a set of random particles with associated weights, which is seldom reported. Once this joint posterior density is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform optimal wavelet filtering for extracting bearing fault features. Two case studies illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extracting bearing fault features. Additionally, the method can be generalized by using different wavelet functions and metrics and applied more widely to any other situation in which optimal wavelet filtering is required.

Wang, Dong; Sun, Shilong; Tse, Peter W.

2015-02-01
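The wavelet-parameter search described above can be caricatured as Monte Carlo sampling of (center frequency, bandwidth) candidates scored by an impulsiveness metric. The sketch below uses an ideal FFT band-pass in place of a wavelet and kurtosis as the metric; both are simplifying stand-ins, not the paper's exact choices, and all function names are invented here.

```python
import numpy as np

rng = np.random.default_rng(1)

def bandpass(sig, fs, fc, bw):
    """Ideal FFT band-pass around center fc (Hz) with bandwidth bw (Hz)."""
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1.0 / fs)
    spec[(f < fc - bw / 2) | (f > fc + bw / 2)] = 0.0
    return np.fft.irfft(spec, len(sig))

def pf_wavelet_search(sig, fs, n_particles=200):
    """Sample (center, bandwidth) 'particles', weight each by the kurtosis
    of the filtered signal (a common impulsiveness measure for bearing
    faults), and return the highest-weight parameters."""
    fc = rng.uniform(0.05 * fs / 2, 0.95 * fs / 2, n_particles)
    bw = rng.uniform(0.02 * fs / 2, 0.3 * fs / 2, n_particles)

    def kurt(x):
        x = x - x.mean()
        return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12)

    w = np.array([kurt(bandpass(sig, fs, c, b)) for c, b in zip(fc, bw)])
    best = w.argmax()
    return fc[best], bw[best]
```

In the paper the weights define a full joint posterior over the parameters rather than a single best candidate; this sketch keeps only the argmax for brevity.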

115

Ridge filter design for a particle therapy line

NASA Astrophysics Data System (ADS)

The beam irradiation system for particle therapy can use a passive or an active beam irradiation method. In the case of active beam irradiation, a ridge filter is appropriate for generating a spread-out Bragg peak (SOBP) over a large scanning area. For this study, a ridge filter was designed as an energy modulation device for a prototype active scanning system at MC-50 in the Korea Institute of Radiological And Medical Science (KIRAMS). The ridge filter was designed to create a 10 mm SOBP for a 45-MeV proton beam. To reduce the distal penumbra and the initial dose, [DM] determined the weighting factor for each Bragg peak by applying an in-house iteration code and the Minuit fit package of ROOT. A single ridge bar shape and its corresponding thickness were obtained from 21 weighting factors. A ridge filter was then fabricated from polymethyl methacrylate (PMMA) to cover a large scanning area (300 × 300 mm2). The fabricated ridge filter was tested at the prototype active beamline of MC-50. The SOBP and the incident beam distribution were obtained using HD-810 Gafchromic film placed at a right triangle to the PMMA block. The depth dose profile for the SOBP can be obtained precisely by using the flat-field correction and measuring the 2-dimensional distribution of the incoming beam. After the flat-field correction, the experimental results show that the SOBP region matches the design requirement well, with 0.62% uniformity.

Kim, Chang Hyeuk; Han, Garam; Lee, Hwa-Ryun; Kim, Hyunyong; Jang, Hong Suk; Kim, Jeong Hwan; Park, Dong Wook; Jang, Sea Duk; Hwang, Won Taek; Kim, Geun-Beom; Yang, Tae-Keun

2014-05-01

116

Independent motion detection with a rival penalized adaptive particle filter

NASA Astrophysics Data System (ADS)

Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.

Becker, Stefan; Hübner, Wolfgang; Arens, Michael

2014-10-01

117

Adapting the Sample Size in Particle Filters Through KLD-Sampling

This paper improves the efficiency of particle filters by adapting the size of the sample sets during the estimation process. The key idea of KLD-sampling is to bound the approximation error of the sample-based representation, measured by the Kullback-Leibler divergence. The approach yields improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.

Fox, Dieter (University of Washington at Seattle)
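For reference, Fox's bound on the number of particles, which KLD-sampling evaluates at each step, can be computed directly. A sketch is below; the variable names are mine, with `k` the number of histogram bins currently occupied by particles and `z` the standard-normal quantile for the desired confidence level.

```python
import math

def kld_sample_size(k: int, epsilon: float = 0.05, z: float = 1.96) -> int:
    """Approximate number of particles needed so that, with the confidence
    implied by quantile z, the KL divergence between the sample-based
    estimate and the true posterior stays below epsilon (Fox's bound,
    via the Wilson-Hilferty approximation of the chi-square quantile)."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z) ** 3))
```

The bound grows roughly linearly in the number of occupied bins, which is why the adaptive filter uses few particles when the belief is concentrated and many when it is spread out.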

118

The new approach for infrared target tracking based on the particle filter algorithm

NASA Astrophysics Data System (ADS)

Target tracking against complex backgrounds in infrared image sequences is a hot research field, providing an important basis for video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can handle nonlinear and non-Gaussian problems and is therefore widely used. Various forms of the particle filter remain valid when target occlusion occurs or when tracking must recover from failure, but capturing changes in the state space requires enough particles, and this number grows exponentially with the dimension, which increases the amount of computation. In this paper, the particle filter algorithm is combined with mean shift, addressing the deficiencies of the classic mean shift tracking algorithm, which is easily trapped in local minima and unable to reach the global optimum against complex backgrounds. From the two perspectives of adaptive multiple-information fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multiple-information fusion. After analysing the infrared characteristics of the target, the algorithm first extracts the target's grayscale and edge features and guides both by the target's motion information, yielding motion-guided grayscale and motion-guided edge features. A new adaptive fusion mechanism then integrates these two features into the mean shift tracking framework.
Finally, we design an automatic target-model updating strategy to further improve tracking performance. Experimental results show that this algorithm compensates for the heavy computation of the particle filter and effectively overcomes the tendency of mean shift to fall into local extrema instead of the global maximum. Because it fuses grayscale and target motion information, the approach also suppresses background interference, ultimately improving the stability and real-time performance of target tracking.

Sun, Hang; Han, Hong-xia

2011-08-01
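The mean shift procedure embedded in the particle filter framework above is, at its core, kernel-weighted mode seeking. A generic sketch on point data follows; the tracker itself operates on image feature histograms, which is not reproduced here, and the bandwidth and tolerance are illustrative choices.

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Plain mean-shift mode seeking with a Gaussian kernel: repeatedly
    move the estimate to the kernel-weighted mean of the points."""
    x = np.asarray(start, float)
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)       # Gaussian kernel weights
        x_new = (w[:, None] * points).sum(0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:         # converged to a mode
            break
        x = x_new
    return x
```

Because each step only climbs toward the nearest mode, the procedure can stall at a local maximum, which is the weakness the particle filter combination is meant to fix.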

119

A multi-dimensional procedure for BNCT filter optimization

An initial version of an optimization code utilizing two-dimensional radiation transport methods has been completed. This code is capable of predicting material compositions of a beam tube-filter geometry which can be used in a boron neutron capture therapy treatment facility to improve the ratio of the average radiation dose in a brain tumor to that in the healthy tissue surrounding the tumor. The optimization algorithm employed by the code is very straightforward. After an estimate of the gradient of the dose ratio with respect to the nuclide densities in the beam tube-filter geometry is obtained, changes in the nuclide densities are made based on: (1) the magnitude and sign of the components of the dose ratio gradient, (2) the magnitude of the nuclide densities, (3) the upper and lower bound of each nuclide density, and (4) the linear constraint that the sum of the nuclide density fractions in each material zone be less than or equal to 1.0. A local optimal solution is assumed to be found when one of the following conditions is satisfied in every material zone: (1) the maximum positive component of the gradient corresponds to a nuclide at its maximum density and the sum of the density fractions equals 1.0, or (2) the positive and negative components of the gradient correspond to nuclide densities at their upper and lower bounds, respectively, and the remaining components of the gradient are sufficiently small. The optimization procedure has been applied to a beam tube-filter geometry coupled to a simple tumor-patient head model, and an improvement of 50% in the dose ratio was obtained.

Lille, R.A.

1998-02-01
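The update rule outlined above, stepping along the dose-ratio gradient subject to per-nuclide bounds and the sum-of-fractions constraint, resembles a projected gradient step. A generic sketch follows; the function name, step size, and the simple rescaling projection are illustrative, not the code's actual scheme.

```python
import numpy as np

def density_step(frac, grad, step=0.1, lo=0.0, hi=1.0):
    """One gradient-ascent update of the nuclide density fractions in a
    material zone: move along the dose-ratio gradient, clip to the
    per-nuclide bounds, then rescale if the sum <= 1 constraint is violated."""
    new = np.clip(frac + step * grad, lo, hi)
    total = new.sum()
    if total > 1.0:
        new *= 1.0 / total  # project back onto the feasible region
    return new
```

Iterating such steps until no feasible ascent direction remains corresponds to the local-optimum stopping conditions the abstract lists.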

120

Design optimization of volume holographic gratings for wavelength filters

NASA Astrophysics Data System (ADS)

Volume holography is promising for devices such as wavelength filters. However, in previously reported work on these holographic devices, the diffraction efficiency and wavelength selectivity were not satisfactory, which affects the insertion loss and channel spacing of the device, respectively. To investigate the performance of volume holographic devices of finite size with 90-degree geometry, two-dimensional (2-D) coupled-wave theory is more accurate than analyses based on the well-known Kogelnik coupled-wave theory. This paper first presents a closed-form analytical solution to 2-D coupled-wave theory for 2-D restricted gratings. Then, to achieve optimum insertion loss and channel spacing for dense wavelength division multiplexing (DWDM) filters, diffraction properties, especially the effects of grating strength and grating size ratio on peak diffraction efficiency and wavelength selectivity, are investigated based on the 2-D coupled-wave theory and its solution. The results show that this solution is capable of design optimization of volume holographic gratings for various devices, including wavelength filters, and the design optimization is given to obtain the optimum peak diffraction efficiency and wavelength selectivity. Finally, experimental results on the angular selectivity for different grating size ratios are given, which agree well with the 2-D coupled-wave theory.

Wang, Bo; Chang, Liang; Tao, Shiquan

2005-02-01

121

Simulation-based optimal filter for maneuvering target tracking

NASA Astrophysics Data System (ADS)

While single-model filters are sufficient for tracking targets having fixed kinematic behavior, maneuvering targets require the use of multiple models. Jump Markov linear systems (JMLS), whose parameters evolve with time according to a finite state-space Markov chain, have been used in these situations with great success. However, it is well known that performing optimal estimation for JMLS involves a prohibitive computational cost, exponential in the number of observations. Many approximate methods have been proposed in the literature to circumvent this, including the well-known GPB and IMM algorithms. These methods are computationally cheap, but at the cost of being suboptimal. Efficient off-line methods have recently been proposed based on Markov chain Monte Carlo algorithms that outperform recent methods based on Expectation-Maximization algorithms. However, realistic tracking systems need on-line techniques. In this paper, we propose an original on-line Monte Carlo filtering algorithm to perform optimal state estimation of JMLS. The approach taken is loosely based on the bootstrap filter which, while being a powerful general algorithm in its original form, does not make the most of the structure of JMLS. The proposed algorithm exploits this structure and leads to a significant performance improvement.
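
One standard way to exploit the JMLS structure (the general Rao-Blackwellization idea, not necessarily the authors' exact algorithm) is to use particles only for the discrete Markov mode and handle the conditionally linear-Gaussian state exactly with one Kalman filter per particle. A minimal scalar sketch with made-up model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar JMLS: x_k = a[r_k]*x_{k-1} + w,  y_k = x_k + v  (illustrative parameters)
a = np.array([0.9, -0.5])            # dynamics per discrete mode
P_trans = np.array([[0.95, 0.05],    # Markov chain transition matrix
                    [0.10, 0.90]])
q, r_var = 0.1, 0.2                  # process / measurement noise variances

def rbpf_step(modes, means, varis, w, y):
    """One Rao-Blackwellized step: sample modes, Kalman-update each particle."""
    N = len(modes)
    modes = np.array([rng.choice(2, p=P_trans[m]) for m in modes])
    # Kalman prediction per particle
    m_pred = a[modes] * means
    P_pred = a[modes] ** 2 * varis + q
    # weight by the predictive likelihood p(y | mode history)
    S = P_pred + r_var
    w = w * np.exp(-0.5 * (y - m_pred) ** 2 / S) / np.sqrt(2 * np.pi * S)
    w /= w.sum()
    # Kalman measurement update
    K = P_pred / S
    means = m_pred + K * (y - m_pred)
    varis = (1 - K) * P_pred
    # multinomial resampling
    idx = rng.choice(N, size=N, p=w)
    return modes[idx], means[idx], varis[idx], np.full(N, 1.0 / N)
```

Because the continuous state is marginalized exactly, the particles only have to explore the discrete mode sequence, which is the structural advantage the abstract alludes to.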

Doucet, Arnaud; Gordon, Neil J.

1999-10-01

122

Optimal initial perturbations for El Nino ensemble prediction with ensemble Kalman filter

An ensemble Kalman filter (EnKF) is used to generate initial conditions for El Niño ensemble prediction; among the initial conditions generated by EnKF, ensemble members with fast growth are examined. Keywords: Ensemble Kalman filter; Seasonal prediction; Optimal initial perturbation; Ensemble prediction

Kang, In-Sik

123

Optimizing Automated Particle Analysis for Forensic Applications

Presentation fragment: Nicholas W. M. Ritchie, Measurement Science Division, Materials Measurement Lab, 29-Nov-2012. Application areas include materials science, forensics, and manufacturing; instrumentation includes two electron microprobes and two FIBs; topic: X-ray microanalysis.

Perkins, Richard A.

124

Adaptive Object Tracking using Particle Swarm Optimization

This paper presents an automatic object detection and tracking algorithm using a particle swarm optimization (PSO)-based method, a searching algorithm inspired by the behaviors of social insects in nature. A cascade of boosted classifiers based on Haar-like features is trained and employed to detect objects. To improve the searching efficiency, first the object model is projected

Yuhua Zheng; Yan Meng

2007-01-01

125

Loss of Fine Particle Ammonium from Denuded Nylon Filters

Ammonium is an important constituent of fine particulate mass in the atmosphere, but can be difficult to quantify due to possible sampling artifacts. Losses of semivolatile species such as NH4NO3 can be particularly problematic. In order to evaluate ammonium losses from aerosol particles collected on filters, a series of field experiments was conducted using denuded nylon and Teflon filters at Bondville, Illinois (February 2003), San Gorgonio, California (April 2003 and July 2004), Grand Canyon National Park, Arizona (May 2003), Brigantine, New Jersey (November 2003), and Great Smoky Mountains National Park (NP), Tennessee (July-August 2004). Samples were collected over 24-hr periods. Losses from denuded nylon filters ranged from 10% (monthly average) in Bondville, Illinois to 28% in San Gorgonio, California in summer. Losses on individual sample days ranged from 1% to 65%. Losses tended to increase with increasing diurnal temperature and relative humidity changes and with the fraction of ambient total N(-III) (particulate NH4+ plus gaseous NH3) present as gaseous NH3. The amount of ammonium lost at most sites could be explained by the amount of NH4NO3 present in the sampled aerosol. Ammonium losses at Great Smoky Mountains NP, however, significantly exceeded the amount of NH4NO3 collected. Ammoniated organic salts are suggested as additional important contributors to observed ammonium loss at this location.

Yu, Xiao-Ying; Lee, Taehyoung; Ayres, Benjamin; Kreidenweis, Sonia M.; Malm, William C.; Collett, Jeffrey L.

2006-08-01

126

Tracking low SNR targets using particle filter with flow control

NASA Astrophysics Data System (ADS)

In this work we study the problem of detecting and tracking challenging targets that exhibit low signal-to-noise ratios (SNR). We have developed a particle filter-based track-before-detect (TBD) algorithm for tracking such dim targets. The approach incorporates the most recent state estimates to control the particle flow accounting for target dynamics. The flow control enables accumulation of signal information over time to compensate for target motion. The performance of this approach is evaluated using a sensitivity analysis based on varying target speed and SNR values. This analysis was conducted using high-fidelity sensor and target modeling in realistic scenarios. Our results show that the proposed TBD algorithm is capable of tracking targets in cluttered images with SNR values much less than one.

Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

2014-06-01

127

Nonlinear EEG Decoding Based on a Particle Filter Model

As the world moves into an aging society, rehabilitation robots play an increasingly important role in both the rehabilitation treatment and the nursing of patients with neurological diseases. Because it carries abundant movement information, electroencephalography (EEG) has become a promising information source for rehabilitation robot control. Although the multiple linear regression model has been used as the decoding model of EEG signals in some studies, it is considered unable to reflect the nonlinear components of EEG signals. In order to overcome this shortcoming, we propose a nonlinear decoding model, the particle filter model. Two- and three-dimensional decoding experiments were performed to test the validity of this model. In decoding accuracy, the results are comparable to those of the multiple linear regression model and previous EEG studies. In addition, the particle filter model uses less training data and more frequency information than the multiple linear regression model, which shows the potential of nonlinear decoding models. Overall, the findings hold promise for the furtherance of EEG-based rehabilitation robots. PMID:24949420

Hong, Jun

2014-01-01

128

Symmetric Phase-Only Filtering in Particle-Image Velocimetry

NASA Technical Reports Server (NTRS)

Symmetrical phase-only filtering (SPOF) can be exploited to obtain substantial improvements in the results of data processing in particle-image velocimetry (PIV). In comparison with traditional PIV data processing, SPOF PIV data processing yields narrower and larger amplitude correlation peaks, thereby providing more-accurate velocity estimates. The higher signal-to-noise ratios associated with the higher amplitude correlation peaks afford greater robustness and reliability of processing. SPOF also affords superior performance in the presence of surface flare light and/or background light. SPOF algorithms can readily be incorporated into pre-existing algorithms used to process digitized image data in PIV, without significantly increasing processing times. A summary of PIV and traditional PIV data processing is prerequisite to a meaningful description of SPOF PIV processing. In PIV, a pulsed laser is used to illuminate a substantially planar region of a flowing fluid in which particles are entrained. An electronic camera records digital images of the particles at two instants of time. The components of velocity of the fluid in the illuminated plane can be obtained by determining the displacements of particles between the two illumination pulses. The objective in PIV data processing is to compute the particle displacements from the digital image data. In traditional PIV data processing, to which the present innovation applies, the two images are divided into a grid of subregions and the displacements determined from cross-correlations between the corresponding sub-regions in the first and second images. The cross-correlation process begins with the calculation of the Fourier transforms (or fast Fourier transforms) of the subregion portions of the images. The Fourier transforms from the corresponding subregions are multiplied, and this product is inverse Fourier transformed, yielding the cross-correlation intensity distribution. 
The average displacement of the particles across a subregion results in a displacement of the correlation peak from the center of the correlation plane. The velocity is then computed from the displacement of the correlation peak and the time between the recording of the two images. The process as described thus far is performed for all the subregions. The resulting set of velocities in grid cells amounts to a velocity vector map of the flow field recorded on the image plane. In traditional PIV processing, surface flare light and bright background light give rise to a large, broad correlation peak, at the center of the correlation plane, that can overwhelm the true particle- displacement correlation peak. This has made it necessary to resort to tedious image-masking and background-subtraction procedures to recover the relatively small amplitude particle-displacement correlation peak. SPOF is a variant of phase-only filtering (POF), which, in turn, is a variant of matched spatial filtering (MSF). In MSF, one projects a first image (denoted the input image) onto a second image (denoted the filter) as part of a computation to determine how much and what part of the filter is present in the input image. MSF is equivalent to cross-correlation. In POF, the frequency-domain content of the MSF filter is modified to produce a unitamplitude (phase-only) object. POF is implemented by normalizing the Fourier transform of the filter by its magnitude. The advantage of POFs is that they yield correlation peaks that are sharper and have higher signal-to-noise ratios than those obtained through traditional MSF. In the SPOF, these benefits of POF can be extended to PIV data processing. The SPOF yields even better performance than the POF approach, which is uniquely applicable to PIV type image data. In SPOF as now applied to PIV data processing, a subregion of the first image is treated as the input image and the corresponding subregion of the second image is treated as the filter. 
The Fourier transforms from both the first- and second-image subregions are normalized by the square roots of their respective magnitudes.
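
The SPOF processing of one subregion pair can be sketched in a few lines. This is a minimal illustration of the normalization described above, not NASA's production code; windowing, peak interpolation, and other practical details are omitted:

```python
import numpy as np

def spof_correlate(sub1, sub2, eps=1e-10):
    """Cross-correlate two PIV subregions with symmetric phase-only filtering:
    both spectra are normalized by the square roots of their magnitudes."""
    F1 = np.fft.fft2(sub1)
    F2 = np.fft.fft2(sub2)
    G = F1 * np.conj(F2)
    G /= np.sqrt(np.abs(F1)) * np.sqrt(np.abs(F2)) + eps   # SPOF normalization
    corr = np.real(np.fft.ifft2(G))
    return np.fft.fftshift(corr)   # peak offset from center gives displacement
```

The particle displacement is then read off as the offset of the correlation peak from the center of the (fftshifted) correlation plane.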

Wernet, Mark P.

2008-01-01

129

Solving constrained optimization problems with hybrid particle swarm optimization

NASA Astrophysics Data System (ADS)

Constrained optimization problems (COPs) are very important in that they frequently appear in the real world. A COP, in which both the function and constraints may be nonlinear, consists of the optimization of a function subject to constraints. Constraint handling is one of the major concerns when solving COPs with particle swarm optimization (PSO) combined with the Nelder-Mead simplex search method (NM-PSO). This article proposes embedded constraint handling methods, which include the gradient repair method and constraint fitness priority-based ranking method, as a special operator in NM-PSO for dealing with constraints. Experiments using 13 benchmark problems are explained and the NM-PSO results are compared with the best known solutions reported in the literature. Comparison with three different meta-heuristics demonstrates that NM-PSO with the embedded constraint operator is extremely effective and efficient at locating optimal solutions.
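
The general constraint-handling idea can be illustrated with a plain PSO plus a static penalty term. This is a deliberately simple stand-in for the gradient-repair and constraint-fitness ranking operators of NM-PSO described above; all parameter values and the example problem are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, penalty, lb, ub, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO with a static penalty for constraint handling."""
    d = len(lb)
    x = rng.uniform(lb, ub, (n, d))
    v = np.zeros((n, d))
    cost = lambda p: f(p) + penalty(p)
    pbest, pval = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lb, ub)                              # keep in bounds
        val = np.apply_along_axis(cost, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

# Example COP: minimize x0^2 + x1^2 subject to x0 + x1 >= 1
f = lambda p: p[0] ** 2 + p[1] ** 2
pen = lambda p: 1e3 * max(0.0, 1.0 - (p[0] + p[1])) ** 2
best = pso_minimize(f, pen, np.array([-2.0, -2.0]), np.array([2.0, 2.0]))
```

For this example the constrained optimum is at (0.5, 0.5); the penalty weight trades constraint violation against objective value, which is exactly the tuning burden the embedded operators of the paper are designed to avoid.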

Zahara, Erwie; Hu, Chia-Hsin

2008-11-01

130

Carbon nanotube based photon filter for energetic particle detection

NASA Astrophysics Data System (ADS)

Energetic particles (EP) ejected from a plasma carry important information about the plasma physics. To study remote plasmas in the heliosphere, space-based sensors must be used. Furthermore, only energetic neutral atoms (ENAs) can be analyzed, since charged-particle trajectories are curved by the electric and magnetic fields of the heliosphere. Because low power consumption and weight are important for spacecraft, solid-state detectors are used. The challenge with solid-state detectors is their sensitivity to light; in all observational regions of interest, photon counts are several orders of magnitude higher than ENA counts. Current state-of-the-art solid-state detectors use ultra-thin metal or carbon films to block the photons. This sets an energy threshold for the ENAs, because the ENAs have to penetrate this film. We aim to replace the thin films with carbon nanotube (CNT) mats. The CNT mats have a much lower density while maintaining extremely high photon absorption. Thus the CNT mats will act as an excellent filter for blocking the photons while minimally affecting the ENAs of interest. We will describe the fabrication of the CNT mats and their performance characterization by optical spectroscopy and energetic particle spectroscopy using alpha particles as an ENA simulant.

Deglau, David; Papadakis, Stergios; Monica, Andrew; Andrews, Bruce; Mitchell, Donald

2013-03-01

131

Optimally designed narrowband guided-mode resonance reflectance filters for mid-infrared

Abstract fragment: discrete frequency infrared (DFIR) spectrometry, which uses narrowband filters, has recently been proposed; for accurate design of narrowband guided-mode resonance (GMR) reflectance filters for the mid-infrared, the governing effects must be taken into consideration rigorously.

Cunningham, Brian

132

Efficient Design of Cosine-Modulated Filter Banks via Convex Optimization

This paper presents efficient approaches for designing cosine-modulated filter banks with a linear-phase prototype filter. First, we show that the design problem, with the prototype filter being a spectral factor of a 2Mth-band filter, is a nonconvex optimization problem with a low degree of nonconvexity. As a result, the nonconvex optimization problem can be cast into a semi-definite programming (SDP) problem by a convex relaxation technique.

Ha Hoang Kha; Hoang Duong Tuan; Truong Q. Nguyen

2009-01-01

133

Unit Commitment by Adaptive Particle Swarm Optimization

NASA Astrophysics Data System (ADS)

This paper presents an Adaptive Particle Swarm Optimization (APSO) for the Unit Commitment (UC) problem. APSO reliably and accurately tracks a continuously changing solution. By analyzing the social model of standard PSO for the UC problem of variable size and load demand, adaptive criteria are applied to the PSO parameters and the global best particle (knowledge) based on the diversity of fitness. In the proposed method, PSO parameters are automatically adjusted using a Gaussian modification. To increase the knowledge, the global best particle is updated instead of being kept fixed in each generation. To keep the method from stagnating, idle particles are reset. The real-valued velocity is digitized (0/1) by a logistic function for binary UC. Finally, benchmark data and methods are used to show the effectiveness of the proposed method.
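
The velocity digitization mentioned at the end can be illustrated with the generic binary-PSO update, in which the logistic (sigmoid) of the velocity is used as the probability that a unit's 0/1 status bit is set. This is a textbook sketch of the digitization idea, not the APSO algorithm itself, and the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def binary_pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO step: real velocities are squashed by a logistic
    function and used as bit-on probabilities (e.g., unit on/off status)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    prob = 1.0 / (1.0 + np.exp(-v))                  # logistic squashing
    x = (rng.random(x.shape) < prob).astype(int)     # digitized 0/1 position
    return x, v
```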

Saber, Ahmed Yousuf; Senjyu, Tomonobu; Miyagi, Tsukasa; Urasaki, Naomitsu; Funabashi, Toshihisa

134

Optimally stabilized PET image denoising using trilateral filtering.

Low resolution and a signal-dependent noise distribution in positron emission tomography (PET) images make the denoising process an inevitable step prior to qualitative and quantitative image analysis tasks. Conventional PET denoising methods either over-smooth small-sized structures due to resolution limitations or make incorrect assumptions about the noise characteristics. Therefore, clinically important quantitative information may be corrupted. To address these challenges, we introduce a novel approach to remove signal-dependent noise in PET images, where the noise distribution is modeled as a Poisson-Gaussian mixture. The generalized Anscombe transformation (GAT) is used to stabilize the varying nature of the PET noise. Beyond noise stabilization, it is also desirable for the noise removal filter to preserve the boundaries of structures while smoothing the noisy regions, and to avoid significant loss of quantitative information such as standard uptake value (SUV)-based metrics as well as metabolic lesion volume. To satisfy all these properties, we extend the bilateral filtering method to trilateral filtering through a multiscaling and optimal Gaussianization process. The proposed method was tested on more than 50 PET-CT images from various patients having different cancers and achieved superior performance compared to the widely used denoising techniques in the literature. PMID:25333110
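
The variance-stabilization step can be made concrete with the generalized Anscombe transform for Poisson-Gaussian data. The closed form below is the standard one from the variance-stabilization literature; the gain and offset defaults are illustrative, and the paper's multiscale extension is not shown:

```python
import numpy as np

def gat(y, sigma, alpha=1.0, mu=0.0):
    """Generalized Anscombe transform: maps y = alpha*Poisson + N(mu, sigma^2)
    to data with approximately unit variance."""
    arg = alpha * y + 0.375 * alpha ** 2 + sigma ** 2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```

After stabilization, a filter designed for constant-variance Gaussian noise can be applied, followed by an (exact or approximate) inverse transform.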

Mansoor, Awais; Bagci, Ulas; Mollura, Daniel J

2014-01-01

135

Cluster-Structured Adaptive Particle Swarm Optimization

NASA Astrophysics Data System (ADS)

A new cluster-structured Particle Swarm Optimization (PSO) with interaction and diversity of parameters is proposed in this letter. After a swarm of PSO is divided into some sub-swarms (clusters), interactions between sub-swarms and diversity of PSO parameters are added so as to improve the search ability of PSO in the proposed cluster-structured PSO. The feasibility and the advantage of the proposed cluster-structured PSO are demonstrated through numerical simulations using two typical optimization test problems.

Yazawa, Kazuyuki; Motoki, Makoto; Ishigame, Atsushi; Yasuda, Keiichiro

136

MRL-filters: a general class of nonlinear systems and their optimal design for image processing

A class of morphological/rank/linear (MRL) filters is presented as a general nonlinear tool for image processing. They consist of a linear combination of a morphological/rank filter and a linear filter. A gradient steepest-descent method is proposed to optimally design these filters, using the averaged least mean squares (LMS) algorithm. The filter design is viewed as a learning process, and convergence

Lúcio F. C. Pessoa; Petros Maragos

1998-01-01

137

Analytic design of optimal FIR narrow-band filters using Zolotarev polynomials

NASA Astrophysics Data System (ADS)

An analytic method for designing narrow-band FIR filters using Zolotarev polynomials, which are extensions of Chebyshev polynomials, is proposed. These filters are optimal in the Chebyshev sense. Bandpass and bandstop narrow-band filters, as well as low-pass and high-pass filters, can be designed by this method. The design procedure and related formulas are presented. Design examples are included to show the properties of these filters.

Chen, Xiangkun; Parks, Thomas W.

1986-11-01

138

Numerical simulation of DPF filter for selected regimes with deposited soot particles

NASA Astrophysics Data System (ADS)

For the purpose of accumulating particulate matter from Diesel engine exhaust gas, particle filters are used (referred to as DPF or FAP filters in the automotive industry). However, the cost of these filters is quite high, and as emission limits become stricter, the requirements for PM collection rise accordingly. Particulate matter is very dangerous to human health and is invisible to the human eye; it can cause various diseases of the respiratory tract and can even cause lung cancer. Numerical simulations were performed to analyze particle filter behavior under various operating modes. The simulations were especially focused on selected critical states of the particle filter, when the engine is switched to an emergency regime. The aim was to prevent and avoid critical situations through an understanding of the filter behavior. The numerical simulations were based on experimental analysis of used diesel particle filters.

Lávička, David; Kovařík, Petr

2012-04-01

139

Wet particle source identification and reduction using a new filter cleaning process

NASA Astrophysics Data System (ADS)

Wet particle reduction during filter installation and start-up aligns closely with initiatives to reduce both chemical consumption and preventive maintenance time. The present study focuses on the effects of filter material cleanliness on wet particle defectivity through evaluation of filters that have been treated with a new enhanced cleaning process focused on reducing organic compounds. Little difference in filter performance is observed between the two filter types at a size detection threshold of 60 nm, while clear differences are observed at 26 nm. This suggests that organic compounds can be identified as a potential source of wet particles. Pall recommends filters that have been treated with the special cleaning process for applications with a critical defect size of less than 60 nm. Standard filter products are capable of satisfying wet-particle defect performance criteria in less critical lithography applications.

Umeda, Toru; Morita, Akihiko; Shimizu, Hideki; Tsuzuki, Shuichi

2014-03-01

140

In this article, a novel approach for 2-channel linear-phase quadrature mirror filter (QMF) bank design, based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints, is introduced. For the purpose of this work, recently proposed nature-inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature-inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the error in the passband, the stopband, and the transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (Lagrange multiplier method) and nature-inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error, stopband error, transition band error, peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the merit of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best result in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower- and higher-order 2-channel QMF bank design. A comparative study of various nature-inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. PMID:25034647
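
The frequency-domain objective described above can be written down generically: squared error against the ideal response in the passband and stopband, plus the error at the quadrature frequency where the prototype magnitude should be 1/sqrt(2). The band edges, grid size, and equal weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def qmf_objective(h, wp=0.4 * np.pi, ws=0.6 * np.pi, ngrid=512):
    """Sum of squared errors of prototype H(w): passband (ideal |H| = 1),
    stopband (ideal |H| = 0), and |H(pi/2)| = 1/sqrt(2) at quadrature."""
    w = np.linspace(0, np.pi, ngrid)
    n = np.arange(len(h))
    H = np.abs(np.exp(-1j * np.outer(w, n)) @ h)   # magnitude response on grid
    ep = np.sum((H[w <= wp] - 1.0) ** 2)           # passband error
    es = np.sum(H[w >= ws] ** 2)                   # stopband error
    Hq = np.abs(np.exp(-1j * (np.pi / 2) * n) @ h)
    et = (Hq - 1.0 / np.sqrt(2.0)) ** 2            # error at quadrature frequency
    return ep + es + et
```

An optimizer (gradient-based or nature-inspired) would then search over the prototype coefficients h to minimize this scalar.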

Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

2014-07-14

141

Object tracking with particle filter in UAV video

NASA Astrophysics Data System (ADS)

Aerial surveillance is a main functionality of a UAV, realized via its video camera. During operations, the assigned mission targets are usually moving objects, such as people or vehicles; object tracking is therefore a key technique for the UAV sensor payload. Two difficulties for UAV object tracking are the dynamic background and the difficulty of predicting the target's motion. To solve these problems, a particle filter is employed in this research. Modeling the target by its characteristics, for instance color features, it approximates the probability density of the target state with weighted sample sets, and the state vector contains position, motion vector and region parameters. The experiments demonstrate the effectiveness and robustness of the proposed method in UAV video tracking.
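
The weighted-sample approximation can be sketched with a generic bootstrap-filter step for 2-D position tracking. This is a simplified stand-in for the paper's method: the state here is position only (the paper's state vector also carries motion and region parameters), and the likelihood function is a hypothetical Gaussian around a fixed target rather than a color-feature match:

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_track_step(particles, weights, likelihood, q=2.0):
    """One bootstrap-filter step: random-walk prediction, likelihood
    weighting, and systematic resampling."""
    # predict: diffuse particles with process noise
    particles = particles + rng.normal(0.0, q, particles.shape)
    # update: weight by the observation likelihood
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # systematic resampling
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx], np.full(N, 1.0 / N)

# hypothetical likelihood: Gaussian response around a target at (50, 40)
target = np.array([50.0, 40.0])
lik = lambda p: np.exp(-0.5 * np.sum((p - target) ** 2, axis=1) / 25.0)
```

In the actual tracker the likelihood would score each particle's image region against the target's color model.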

Yu, Wenshuai; Yin, Xiaodong; Chen, Bing; Xie, Jinhua

2013-10-01

142

Particle loading rates for HVAC filters, heat exchangers, and ducts Nomenclature

Abstract fragment: strategies to limit particle deposition; the predicted mass loading rates allow for the assessment of ... Nomenclature fragment: A_fl (a surface area), particle mass concentration (mg/m3), d_p particle diameter (µm), E_c particle mass emission rate distribution function.

Siegel, Jeffrey

143

It is challenging to measure the in vivo kinematics of the finger's underlying bones. This paper presents a new method of finger kinematics measurement, using a geometric finger model and several markers deliberately attached to the skin surface. Using a multiple-view camera system, the optimal motion parameters of the finger model were estimated using the proposed mixture-prior particle filtering. This prior, consisting of model and marker information, avoids generating improper particles and helps achieve near real-time performance. The method was validated using a planar fluoroscopy system that worked simultaneously with the photographic system. Ten male subjects with asymptomatic hands were investigated in the experiments. The results showed that the kinematic parameters could be estimated more accurately by the proposed method than by using only markers, with a 20-40% reduction in skin artefacts achieved for finger flexion/extension. Thus, this profile system can be developed as a tool for reliable kinematics measurement with good applicability in hand rehabilitation. PMID:22225500

Chang, Cheung-Wen; Kuo, Li-Chieh; Jou, I-Ming; Su, Fong-Chin; Sun, Yung-Nien

2013-01-01

144

The particle filtering methodology: introduction to the theory and some of the current challenges. Mónica F. Bugallo. Lecture slides (78 pages) covering an introduction, the PF philosophy, implementation of PF, and challenges of PF.

Dobigeon, Nicolas

145

Agglomerates and granules of nanoparticles as filter media for submicron particles

An experimental study on filtration of submicron solid and liquid aerosol particles by using a filter media composed of agglomerates or granules of nanoparticles is described. Fumed silica nanoagglomerates, carbon black granules, silica shells, activated carbon granules, glass beads and nanoporous hydrophobic aerogel were among the granular filter media tested and compared to a commercially available HEPA fiber-based filter. Other

Jose Quevedo; Gaurav Patel; Robert Pfeffer; Rajesh Dave

2008-01-01

146

Kalman filtering with unknown inputs via optimal state estimation of singular systems

M. Darouach, Université de Lorraine, 54400 Cosnes-et-Romain, France. A new method for designing a Kalman filter for linear systems with unknown inputs is presented. In the Kalman filter, it is generally assumed that all system parameters, noise covariances, and inputs

Paris-Sud XI, Université de

147

Tracking Football Player Movement From a Single Moving Camera Using Particle Filters

Keywords: soccer, tracking, particle filter. This paper deals with the problem of tracking football players in a football match using data from a single moving camera. Tracking footballers from a single video source

Demiris, Yiannis

148

Adapting the Sample Size in Particle Filters Through KLD-Sampling

Over the last years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. The key idea of the KLD-sampling method is to bound the approximation error introduced
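
The resulting bound is easy to state in code. Following Fox's published derivation (chi-square quantile via the Wilson-Hilferty approximation), the required sample-set size for k occupied histogram bins is computed as below; the default error bound epsilon and confidence delta are illustrative:

```python
import math
from statistics import NormalDist

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Particles needed so that, with probability 1 - delta, the K-L
    divergence between the sample-based estimate (over k occupied bins)
    and the true posterior stays below epsilon (KLD-sampling, Fox 2003)."""
    if k < 2:
        return 1
    z = NormalDist().inv_cdf(1.0 - delta)   # upper standard-normal quantile
    c = 2.0 / (9.0 * (k - 1))               # Wilson-Hilferty term
    return math.ceil((k - 1) / (2.0 * epsilon) * (1.0 - c + math.sqrt(c) * z) ** 3)
```

The filter then grows or shrinks its particle set at each step as the number of occupied bins k changes: diffuse posteriors demand many particles, concentrated ones few.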

Dieter Fox

2003-01-01

149

Numerical analysis of particle distribution on multi-pipe ceramic candle filters

The particle distribution on the ceramic filter surface has a great effect on filtration performance. The numerical simulation method is used to analyze the particle distribution near the filter surface under different operating conditions. The gas-solid two-phase flow field in the ceramic filter vessel was simulated using the Eulerian two-fluid model provided by the FLUENT code. The user-defined function was loaded with

H. X. Li; B. G. Gao; Z. X. Tie; Z. J. Sun; F. H. Wang

2010-01-01

151

For power generation with combined cycles or the production of so-called advanced materials by vapor-phase synthesis, particle separation at high temperatures is of crucial importance. There, systems working with rigid ceramic barrier filters are either of thermodynamic benefit to the process or essential for producing materials with certain properties. A hot gas filter test rig has been installed to investigate the influence of different parameters, e.g. temperature, dust properties, filter media, and filtration and regeneration conditions, on particle separation at high temperatures. These tests were conducted both with commonly used filter candles and with filter discs made out of the same material. The filter disc is mounted at one side of the test rig, so both filters face the same raw-gas conditions. The filter disc is traversed by the gas in a cross-flow arrangement. This is based on the conviction that, for comparison of the filtration characteristics of candles with filter discs or other model filters, the structure of the dust cakes has to be equal. This way of conducting investigations into the influence of the above-mentioned parameters on dust separation at high temperatures follows the new standard VDI 3926, which prescribes test procedures for the characterization of filter media at ambient conditions. The paper mainly focuses on the influence of particle properties (e.g. stickiness) upon the filtration and regeneration behavior of fly ashes with rigid ceramic filters.

Pilz, T. [Univ. of Karlsruhe (Germany). Inst. fuer Mechanische Verfahrenstechnik und Mechanik

1995-12-31

152

Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of finding the optimal maintenance strategy for a machine system. Because of the complexity of a machine system, its health-state degradation process cannot simply be characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting by SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained from the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecast by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed method. PMID:24369656

Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song

2013-09-01

153

Optimized Analog Filter Designs With Flat Responses by Semidefinite Programming

Analog filters constitute indispensable components of analog circuits. Inspired by recent advances in digital filter design, this paper provides a flexible design for analog filters. All-pole filters have a maximally flat stopband, so our design minimizes their passband distortion. Analogously, maximally flat filters have a maximally flat passband, so our design maximizes their stopband attenuation. Its particular cases provide

Nguyen Thien Hoang; Hoang Duong Tuan; Truong Q. Nguyen; Hung Gia Hoang

2009-01-01

154

On layout optimization of the microwave diplexor filter using genetic algorithms

An original application of genetic algorithms to the layout optimization of microwave filters is presented. Based on a resonant coupling-iris topology, a Ka-band diplexor filter on a silicon membrane substrate is tuned in order to improve its performance. The optimization process uses the numerical results given by the Sonnet software and the overall process is piloted by a genetic

A. Takacs; A. Serbanescu; G. Leu; H. Aubert; P. Pons; T. Parra; R. Plana

2004-01-01

155

Optimization of astigmatic particle tracking velocimeters

NASA Astrophysics Data System (ADS)

Astigmatic particle tracking velocimetry (APTV) has been developed in recent years to measure the three-dimensional displacement of tracer particles using a single camera view. The measurement principle relies on an astigmatic optical system that provides aberrated particle images with a characteristic elliptical shape univocally related to the corresponding particle depth position. Because of its precision, this concept is also well established for measuring and controlling the distance between a CD/DVD and the reading head. The optical arrangement of an APTV system essentially consists of a primary stigmatic optics (e.g., a microscope or a camera objective) and an astigmatic optics, typically a cylindrical lens placed in front of the camera sensor. This paper focuses on the uncertainty of APTV in the depth direction. First, an approximate analytical model is derived and experimentally validated. From the model, a set of three non-dimensional parameters that are the most significant in the optimization of APTV performance is identified. Finally, the effects of different parameter settings and calibration approaches are studied systematically using numerical Monte Carlo simulations. The results allow for the derivation of general criteria to minimize the overall error in APTV measurements and provide the basis for reliable uncertainty estimation for a wide range of applications.

Rossi, Massimiliano; Kähler, Christian J.

2014-09-01

156

Multi-path light extinction approach for high efficiency filtered oil particle measurement

NASA Astrophysics Data System (ADS)

This work presents a multi-path light extinction approach to determine oil mist filter efficiency based on measuring the concentration and size distribution of oil particles. The light extinction spectrum (LES) technique was used to retrieve the oil particle size distribution and concentration. A multi-path measuring cell was designed to measure the low concentrations and fine particles present after filtering; its path length was calibrated as 200 cm. With oil mist filtering, the measured oil particle size was D32 = 0.9 μm at a volume concentration Cv = 1.6×10-8.
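As background on the extinction measurement itself (not the paper's LES inversion, which integrates Mie theory over a full size distribution), a minimal monodisperse Beer-Lambert sketch shows how transmitted light, the calibrated path length, and volume concentration relate; the extinction efficiency Q_ext = 2 is an assumed large-particle limit:

```python
import math

def volume_fraction_from_transmission(transmission, path_cm, diameter_um, q_ext=2.0):
    """Invert Beer-Lambert for monodisperse spheres (illustrative sketch only).

    tau = -ln(I/I0) / L is the extinction coefficient; for spheres of diameter D
    with extinction efficiency Q, the volume fraction is Cv = 2 * tau * D / (3 * Q).
    """
    tau = -math.log(transmission) / path_cm   # extinction coefficient, 1/cm
    d_cm = diameter_um * 1e-4                 # micrometres -> centimetres
    return 2.0 * tau * d_cm / (3.0 * q_ext)
```

With D = 0.9 μm, a 200 cm path, and Cv of order 10-8 as reported, the transmission is still around 0.9, which is why a long multi-path cell is needed to resolve the low concentrations left after filtering.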

Pengfei, Yin; Jun, Chen; Huinan, Yang; Lili, Liu; Xiaoshu, Cai

2014-04-01

157

Efficient particle filters for joint tracking and classification

NASA Astrophysics Data System (ADS)

Target tracking is usually performed using data from sensors such as radar, whilst the target identification task usually relies on information from sensors such as IFF, ESM or imagery. The differing nature of the data from these sensors has generally led to these two vital tasks being performed separately. However, it is clear that an experienced operator can observe behavior characteristics of targets and, in combination with knowledge and expectations of target type and likely activity, can more knowledgeably identify the target and robustly predict its track than any automatic process yet defined. Most trackers are designed to follow targets within a wide envelope of trajectories and are not designed to derive behavior characteristics or include them as part of their output. Thus, there is potential scope for both applying target type knowledge to improve the reliability of the tracking process, and to derive behavioral characteristics which may enhance knowledge about target identity and/or activity. In this paper we introduce a Bayesian framework for joint tracking and identification and give a robust and computationally efficient particle filter based algorithm for numerical implementation of the resulting recursions. Simulation results illustrating algorithm performance are presented.
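For readers unfamiliar with the numerical machinery, a generic bootstrap (sequential importance resampling) particle filter for a 1-D Gaussian random-walk model is sketched below; this is the textbook baseline, not the joint tracking-and-identification algorithm of the paper:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=1.0, r=1.0, rng=None):
    """Minimal SIR particle filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k,
    with w_k ~ N(0, q) and v_k ~ N(0, r)."""
    rng = np.random.default_rng(rng)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial ensemble from a N(0, 1) prior
    estimates = []
    for y in observations:
        # Propagate: sample from the transition density (the "bootstrap" proposal).
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight each particle by the Gaussian likelihood of the new observation.
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))     # posterior-mean estimate
        # Resample (multinomial) to combat weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```

Resampling every step is the simplest policy; joint tracking and classification augments the state with a discrete class variable, but the recursion above is the common core.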

Gordon, Neil J.; Maskell, Simon; Kirubarajan, Thiagalingam

2002-08-01

158

Human Behavior-Based Particle Swarm Optimization

Particle swarm optimization (PSO) has attracted many researchers dealing with various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm easily becomes trapped in local optima because of the rapid loss of population diversity. Improving the performance of PSO and decreasing its dependence on parameters are therefore two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO, endowed with a random weight drawn from the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity of solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computation cost. PMID:24883357

Xu, Gang; Ding, Gui-yan; Sun, Yu-bo

2014-01-01

159

Human behavior-based particle swarm optimization.

Particle swarm optimization (PSO) has attracted many researchers dealing with various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm easily becomes trapped in local optima because of the rapid loss of population diversity. Improving the performance of PSO and decreasing its dependence on parameters are therefore two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO, endowed with a random weight drawn from the standard normal distribution; this strategy helps trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients c1 and c2 of the standard PSO (SPSO) to reduce the parameter sensitivity of solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computation cost. PMID:24883357
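A hypothetical sketch of the described velocity update is given below; the abstract does not reproduce the exact HPSO equation, so the form of the worst-particle term and the replacement of c1 and c2 by plain uniform random weights are assumptions for illustration only:

```python
import numpy as np

def hpso_step(x, v, pbest, gbest, gworst, w=0.7, rng=None):
    """One hypothetical HPSO-style update.

    x, v, pbest: (n_particles, dim) arrays; gbest, gworst: (dim,) arrays.
    The worst-particle term and the absence of c1/c2 follow the abstract's
    description, but the exact functional form is an assumption.
    """
    rng = np.random.default_rng(rng)
    n, d = x.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Cognitive and social terms with plain uniform random weights (no c1, c2),
    # plus a term built from the global worst particle with a standard-normal weight.
    worst_term = rng.standard_normal((n, 1)) * (x - gworst)
    v_new = w * v + r1 * (pbest - x) + r2 * (gbest - x) + worst_term
    return x + v_new, v_new
```

Because the normal weight can take either sign, the worst-particle term sometimes pushes particles away from the worst position and sometimes toward it, which is one plausible way such a term could inject diversity.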

Liu, Hao; Xu, Gang; Ding, Gui-Yan; Sun, Yu-Bo

2014-01-01

160

ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

High Efficiency Particulate Air (HEPA) filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section meet several performance requirements. These include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (including high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. An additional major concern is the ability of a filter to maintain its integrity if subjected to severe operating conditions, such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK [1]. (authors)

Stillo, Andrew [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States)] [Camfil Farr, 1 North Corporate Drive, Riverdale, NJ 07457 (United States); Ricketts, Craig I. [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)] [New Mexico State University, Department of Engineering Technology and Surveying Engineering, P.O. Box 30001 MSC 3566, Las Cruces, NM 88003-8001 (United States)

2013-07-01

161

Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

NASA Technical Reports Server (NTRS)

The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.

Birge, Brian

2013-01-01

162

NASA Astrophysics Data System (ADS)

Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Moreover, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.

Diaz-Ramirez, Victor H.; Cuevas, Andres; Kober, Vitaly; Trujillo, Leonardo; Awwal, Abdul

2015-03-01

163

NASA Astrophysics Data System (ADS)

Bearings-only tracking is widely used in the defense arena. Its value can be exploited in systems using optical sensors and sonar, among others. Non-linearity and non-Gaussian prior statistics are among the complications of bearings-only tracking. Several filters have been used to overcome these obstacles, including particle filters and multiple hypothesis extended Kalman filters (MHEKF). Particle filters can accommodate a wide range of distributions and do not need to be linearized. Because of this they seem ideally suited for this problem. A MHEKF can only approximate the prior distribution of a bearings-only tracking scenario and needs to be linearized. However, the likelihood distribution maintained for each MHEKF hypothesis demonstrates significant memory and lends stability to the algorithm, potentially enhancing tracking convergence. Also, the MHEKF is insensitive to outliers. For the scenarios under investigation, the sensor platform is tracking a moving and a stationary target. The sensor is allowed to maneuver in an attempt to maximize tracking performance. For these scenarios, we compare and contrast the acquisition time and mean-squared tracking error performance characteristics of particle filters and MHEKF via Monte Carlo simulation.

Zaugg, David A.; Samuel, Alphonso A.; Waagen, Donald E.; Schmitt, Harry A.

2004-07-01

164

Microstructure and particle-laden flow in diesel particulate filter

Due to public awareness of harmful diesel emissions, stricter diesel emission standards such as Euro V in 2008 are being set around the world. As one of the key technologies, the diesel particulate filter (DPF) has been developed to reduce particulate matter (PM) in the after-treatment of exhaust gas. Since the structure of the filter is

Kazuhiro Yamamoto; Shingo Satake; Hiroshi Yamashita

2009-01-01

165

Cosmological parameter estimation using Particle Swarm Optimization

NASA Astrophysics Data System (ADS)

Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

Prasad, J.; Souradeep, T.

2014-03-01

166

NASA Astrophysics Data System (ADS)

For many dynamic estimation problems involving nonlinear and/or non-Gaussian models, particle filtering offers improved performance at the expense of computational effort. This paper describes a scheme for efficiently tracking multiple targets using particle filters. The tracking of the individual targets is made efficient through the use of Rao-Blackwellisation. The tracking of multiple targets is made practicable using Quasi-Monte Carlo integration. The efficiency of the approach is illustrated on synthetic data.

Maskell, Simon; Rollason, Malcolm P.; Gordon, Neil J.; Salmond, David J.

2002-08-01

167

Enumerating and Disinfecting Bacteria Associated With Particles Released From GAC Filter-Adsorbers

Granular activated carbon (GAC) in filter-adsorbers provides an excellent support surface for the proliferation of microorganisms. Therefore, GAC beds may release particles of carbon with attached bacteria that are protected from disinfection. In this pilot-plant study, particles were collected from the product waters of GAC filter-adsorbers, examined for bacterial colonization, and characterized by energy-dispersive X-ray analysis. Results showed that bacteria

William T. Stringfellow; Kathryn Mallon; Francis A. DiGiano

1993-01-01

168

GPU-Accelerated Particle Filtering for 3D Model-Based Visual Tracking

Model-based approaches to 3D object tracking and pose estimation that employ a particle filter are effective and robust, but computational complexity limits their efficacy in real-time scenarios. This thesis describes a novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video using a graphics processing unit (GPU). Specifically, NVIDIA compute unified device

J. Anthony Brown

2010-01-01

169

Array of micro-machined mass energy micro-filters for charged particles

NASA Technical Reports Server (NTRS)

An energy filter for charged particles includes a stack of micro-machined wafers including plural apertures passing through the stack of wafers, focusing electrodes bounding charged particle paths through the apertures, an entrance orifice to each of the plural apertures and an exit orifice from each of the plural apertures and apparatus for biasing the focusing electrodes with an electrostatic potential corresponding to an energy pass band of the filter.

Stalder, Roland E. (Inventor); Van Zandt, Thomas R. (Inventor); Hecht, Michael H. (Inventor); Grunthaner, Frank J. (Inventor)

1996-01-01

170

Modeling Gene Regulatory Networks from Time Series Data using Particle Filtering

Modeling Gene Regulatory Networks from Time Series Data Using Particle Filtering. A thesis by Amina Noor, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2011. Major subject: Electrical Engineering.

Noor, Amina

2012-10-19

171

A method for optimizing multipass laser amplifier output utilizes a spectral filter in early passes but not in later passes. The pulses shift position slightly for each pass through the amplifier, and the filter is placed such that early passes intersect the filter while later passes bypass it. The filter position may be adjusted offline in order to adjust the number of passes in each category. The filter may be optimized for use in a cryogenic amplifier.

Backus, Sterling J. (Erie, CO); Kapteyn, Henry C. (Boulder, CO)

2007-07-10

172

Optimal Planning of Harmonic Filters in an Industrial Plant Considering Uncertainty Conditions

This paper presents an integrated approach combining the feasible direction method and a genetic algorithm (FDM+GA) to investigate the planning of large-scale passive harmonic filters. The optimal filter scheme can be obtained for a system with abundant harmonic current sources where harmonic amplification problems should be avoided. The constraints on harmonics with orders lower than the filter tuning points have been set stricter to

SHU-CHEN WANG; CHI-JUI WU; Ying-Pin Chang

2007-01-01

173

Optimization of Pleated Filter Designs Using a Finite-Element Numerical Model

A numerical model has been developed to optimize the design of pleated filter panels. In this model, the fluid flow is modeled by a steady laminar flow and the filter media resistance is governed by the Darcy-Lapwood-Brinkman equation. A finite element method with a nine-node Lagrangian element is used to solve the governing equations. For the rectangularly pleated filter panel,
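For reference, the Brinkman-extended Darcy momentum balance for flow through porous filter media commonly takes a form like the following, with u the velocity, p the pressure, μ the fluid viscosity, μe an effective viscosity, and K the permeability (the Lapwood extension adds a convective inertia term; the paper's exact Darcy-Lapwood-Brinkman formulation may differ):

```latex
\nabla p = -\frac{\mu}{K}\,\mathbf{u} + \mu_e \nabla^2 \mathbf{u}
```

The Darcy term penalizes flow through the low-permeability medium, while the Brinkman (Laplacian) term lets the porous-media solution match the viscous channel flow at the filter surface, which is what makes a single finite-element formulation across channel and media feasible.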

Da-Ren Chen; David Y. H. Pui; Benjamin Y. H. Liu

1995-01-01

174

NASA Astrophysics Data System (ADS)

An effective optimization approach to the inverse design of complex fiber Bragg grating filters is developed in the present paper. Based on a multi-objective evolutionary programming (MOEP) algorithm, the proposed method can efficiently search for optimal solutions while simultaneously taking into account various requirements on the designed filter. To improve the efficiency of the MOEP-based algorithm, an adaptive mutation process is proposed and verified. One advantage of the proposed optimization method is the capability to impose additional constraints on the desired coupling coefficient, which ensures that the designed devices can actually be fabricated with commercially available photosensitive fibers. To verify the effectiveness of the proposed method, an optimal narrowband dispersionless fiber Bragg grating filter for DWDM optical fiber communication systems is designed. We successfully demonstrate that complicated dispersionless FBG filters with short grating lengths and smooth dispersion profiles can be obtained using the proposed algorithm.

Lee, Cheng-Ling; Lai, Yinchieh

2004-05-01

175

Inertial measurement unit calibration using Full Information Maximum Likelihood Optimal Filtering

The robustness of Full Information Maximum Likelihood Optimal Filtering (FIMLOF) for inertial measurement unit (IMU) calibration in high-g centrifuge environments is considered. FIMLOF uses an approximate Newton's Method ...

Thompson, Gordon A. (Gordon Alexander)

2005-01-01

176

Performance Optimization of a Photovoltaic Generator with an Active Power Filter Application

[Extraction residue; recoverable fragments: nomenclature entries GPV (photovoltaic generator), h (harmonic), MPPT (Maximum Power Point Tracking), and a citation to "PV Generator with an Active Power Filter Application," International Journal on Engineering Applications.]

Paris-Sud XI, Université de

177

DMT bit rate maximization with optimal time domain equalizer filter bank architecture

In a multicarrier modulation system, a time domain equalizer (TEQ) traditionally shortens the transmission channel impulse response (CIR) to mitigate intersymbol interference (ISI). In this paper, we propose a data-rate optimal TEQ filter bank whose data rates at the equalizer output of this filter bank are significantly better than those of the Maximum Bit Rate and Minimum ISI methods and

Milos Milosevic; Lucio F. C. Pessoa; Brian L. Evans; Ross Baldick

2002-01-01

178

Environmentally realistic fingerprint-image generation with evolutionary filter-bank optimization

Keywords: fingerprint image generation, evolutionary algorithm, image filters, input pressure. Constructing a fingerprint database is important to evaluate the performance

Cho, Sung-Bae

179

Hybrid Kalman/H∞ filter in designing optimal navigation of vehicle in PRT System

NASA Astrophysics Data System (ADS)

A PRT (Personal Rapid Transit) system operates automatically, so accurately determining the vehicle's position is important. Many PRT systems have adopted GPS for position, speed, and direction. In this paper, we propose a combination of the Kalman filter and the H∞ filter, known as the hybrid Kalman/H∞ filter, for application to a GPS navigation algorithm. For disturbance cancellation, the Kalman filter is optimal but requires statistical information about the process and measurement noises, while the H∞ filter only minimizes the "worst-case" error and requires that the noises be bounded. The new hybrid filter is expected to reduce the worst-case error and exploit the incomplete knowledge about the noises to provide a better estimate. Experiments show the ability of the hybrid filter in a GPS navigation algorithm.

Kim, Hyunsoo; Nguyen, Hoang Hieu; Nguyen, Phi Long; Kim, Han Sil; Jang, Young Hwan; Ryu, Myungseon; Choi, Changho

2007-12-01

180

Optimal Filters with Multiple Packet Losses and its Application in Wireless Sensor Networks

This paper is concerned with the filtering problem for both discrete-time stochastic linear (DTSL) systems and discrete-time stochastic nonlinear (DTSN) systems. For DTSL systems, a linear optimal filter with multiple packet losses is designed based on the orthogonality principle over unreliable wireless sensor networks (WSNs), and experimental results verify the feasibility and effectiveness of the proposed linear filter. For DTSN systems, an extended minimum variance filter with multiple packet losses is derived; the filter is extended to the nonlinear case by a first-order Taylor series approximation and successfully applied to unreliable WSNs. An application example is given, and the corresponding simulation results show that, compared with the extended Kalman filter (EKF), the proposed extended minimum variance filter is feasible and effective in WSNs. PMID:22319301

Liu, Yonggui; Xu, Bugong; Feng, Linfang; Li, Shanbin

2010-01-01

181

Fibonacci sequence, golden section, Kalman filter and optimal control

A connection between the Kalman filter and the Fibonacci sequence is developed. More precisely it is shown that, for a scalar random walk system in which the two noise sources (process and measurement noise) have equal variance, the Kalman filter's estimate turns out to be a convex linear combination of the a priori estimate and of the measurements with coefficients
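The stated connection is easy to verify numerically. For the scalar random walk with unit process and measurement noise variances (and, as an assumption here, unit initial variance), the Riccati recursion yields Kalman gains that are ratios of consecutive Fibonacci numbers and converge to 1/φ = (√5 − 1)/2 ≈ 0.618:

```python
from fractions import Fraction

def kalman_gains(n, q=1, r=1, p0=1):
    """Gains of the scalar Kalman filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k."""
    p = Fraction(p0)               # initial state variance (assumed 1 here)
    gains = []
    for _ in range(n):
        p_pred = p + q             # predict: the random walk adds process noise q
        k = p_pred / (p_pred + r)  # Kalman gain for the scalar measurement
        p = (1 - k) * p_pred       # posterior variance after the update
        gains.append(k)
    return gains

def fib(n):
    """F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2)."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# With q = r = p0 = 1 the gains are 2/3, 5/8, 13/21, 34/55, ... -> 1/phi
gains = kalman_gains(5)
```

Each gain K_k equals F(2k+1)/F(2k+2), so the estimate is indeed a convex combination of prediction and measurement with Fibonacci-ratio weights, converging to the reciprocal golden ratio.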

Alessio Benavoli; Luigi Chisci; Alfonso Farina

2009-01-01

182

An optimal modification of a Kalman filter for time scales

NASA Technical Reports Server (NTRS)

The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

Greenhall, C. A.

2003-01-01

183

Optimal Filtering of Source Address Prefixes: Models and Algorithms

One approach to blocking malicious traffic is filtering: access control lists (ACLs) can selectively block traffic based on fields of the IP header. Filters (ACLs) are already available in routers today but are a scarce resource.

Markopoulou, Athina

184

Particle Swarm Optimization (SEAL'06 Tutorial)

A tutorial on particle swarm optimization prepared for SEAL'06, Hefei, China, by Xiaodong Li. [Slide-deck extraction residue; recoverable fragments mention schooling behavior as inspiration, swarm intelligence ("Mind is social"), and a quotation from Michael Crichton (2002).]

Li, Xiaodong

185

Cosmological parameter estimation using particle swarm optimization

NASA Astrophysics Data System (ADS)

Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of parameters which best fits the observational data using sampling-based methods like Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option in certain problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there may be cases in which we are mainly interested in finding the point in parameter space at which the probability distribution has its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), which is an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (ΛCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
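As an illustration of PSO used as a likelihood/chi-square optimizer (a generic textbook global-best PSO with standard inertia and acceleration constants, not the authors' pipeline or the actual WMAP likelihood), the sketch below minimizes a stand-in chi-square, the 4-D sphere function:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Generic global-best PSO; bounds is a list of (low, high) pairs per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # keep particles in bounds
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]      # personal bests
        gbest = pbest[pbest_f.argmin()].copy()                      # global best
    return gbest, pbest_f.min()

# Stand-in "chi-square": the 4-D sphere function, minimized at the origin.
best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), [(-5.0, 5.0)] * 4)
```

In a cosmological application, f would evaluate the chi-square of a model against data for a given parameter vector; PSO returns only the best-fit point, which is why the paper follows up with a fitting procedure to characterize the likelihood spread.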

Prasad, Jayanti; Souradeep, Tarun

2012-06-01

186

Optease Vena Cava Filter Optimal Indwelling Time and Retrievability

The purpose of this study was to assess the indwelling time and retrievability of the Optease IVC filter. Between 2002 and 2009, a total of 811 Optease filters were inserted: 382 for prophylaxis in multitrauma patients and 429 for patients with venous thromboembolic (VTE) disease. In 139 patients [97 men and 42 women; mean age, 36 (range, 17-82) years], filter retrieval was attempted. They were divided into two groups to compare change in retrieval policy during the years: group A, 60 patients with filter retrievals performed before December 31 2006; and group B, 79 patients with filter retrievals from January 2007 to October 2009. A total of 128 filters were successfully removed (57 in group A, and 71 in group B). The mean filter indwelling time in the study group was 25 (range, 3-122) days. In group A the mean indwelling time was 18 (range, 7-55) days and in group B 31 days (range, 8-122). There were 11 retrieval failures: 4 for inability to engage the filter hook and 7 for inability to sheathe the filter due to intimal overgrowth. The mean indwelling time of group A retrieval failures was 16 (range, 15-18) days and in group B 54 (range, 17-122) days. Mean fluoroscopy time for successful retrieval was 3.5 (range, 1-16.6) min and for retrieval failures 25.2 (range, 7.2-62) min. Attempts to retrieve the Optease filter can be performed up to 60 days, but more failures will be encountered with this approach.

Rimon, Uri, E-mail: rimonu@sheba.health.gov.il; Bensaid, Paul, E-mail: paulbensaid@hotmail.com; Golan, Gil, E-mail: gilgolan201@gmail.com; Garniek, Alexander, E-mail: garniek@gmail.com; Khaitovich, Boris, E-mail: borislena@012.net.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel); Dotan, Zohar, E-mail: Zohar.Dotan@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Urology (Israel); Konen, Eli, E-mail: Eli.Konen@sheba.health.gov.il [Chaim Sheba Medical Center (Affiliated to the Sackler School of Medicine, Tel-Aviv University, Tel-Aviv), Department of Diagnostic Imaging (Israel)

2011-06-15

187

A Novel Optimizer Based on Particle Swarm Optimizer and LBG for Vector Quantization In Image Coding

This paper presents an optimizer based on particle swarm optimization and LBG (PSO-LBG) for vector quantization in image coding. Three swarms are applied to find the global optimum: two initial swarms and one elitist swarm whose particles are selected from the two initial swarms. At each iteration of a swarm's updating process, particles perform the basic operations of PSO,
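
The "basic operations of PSO" mentioned here are the standard velocity and position updates. A minimal sketch in Python (the inertia and acceleration coefficients `w`, `c1`, `c2` are common illustrative defaults, not values from this paper):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of the canonical PSO velocity/position update.
    positions, velocities, pbest: lists of particles (each a list of floats);
    gbest: best position found by the whole swarm so far."""
    for i in range(len(positions)):
        x, v = positions[i], velocities[i]
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive pull (own best) + social pull (swarm best)
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
    return positions, velocities
```

Each particle is pulled toward its personal best (`pbest`) and the swarm best (`gbest`); the elitist swarm in PSO-LBG applies these same operations to particles drawn from the two initial swarms.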

Huilian Liao; Yiwei Wang; Jiarui Zhou; Zhen Ji

2007-01-01

188

Assessing consumption of bioactive micro-particles by filter-feeding Asian carp

Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 µm) and large (150-200 µm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.

Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.

2012-01-01

189

Optimization of the extended terminal subfluidization wash (ETSW) filter backwashing procedure

The increased passage of particles and microorganisms through granular media filters immediately following backwashing is a common problem known to the water treatment community as filter ripening or maturation. While several strategies have been developed over the years to reduce the impact of this vulnerable period of the filtration cycle on finished water quality, this research involves a recently developed

James E. Amburgey

2005-01-01

190

We present general methodology for sequential inference in nonlinear stochastic state-space models to simultaneously estimate dynamic states and fixed parameters. We show that basic particle filters may fail due to degeneracy in fixed parameter estimation and suggest the use of a kernel density approximation to the filtered distribution of the fixed parameters to allow the fixed parameters to regenerate. In addition, we show that "seemingly" uninformative uniform priors on fixed parameters can affect posterior inferences and suggest the use of priors bounded only by the support of the parameter. We show the negative impact of using multinomial resampling and suggest the use of either stratified or residual resampling within the particle filter. As a motivating example, we use a model for tracking and prediction of a disease outbreak via a syndromic surveillance system. Finally, we use this improved particle filtering methodology to relax prior assumptions on model parameters yet still provide reasonable estimates for model parameters and disease states. PMID:25016201
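
The stratified resampling recommended here over multinomial resampling can be sketched as follows (a generic textbook implementation, not the authors' code): drawing one uniform per equal-width stratum of [0, 1) reduces the variance of the resampled particle counts relative to multinomial sampling.

```python
import random

def stratified_resample(weights):
    """Stratified resampling: one uniform draw per stratum [i/n, (i+1)/n),
    matched against the cumulative distribution of normalized weights."""
    n = len(weights)
    total = sum(weights)
    cdf, c = [], 0.0
    for w in weights:
        c += w / total
        cdf.append(c)
    indices, j = [], 0
    for i in range(n):
        u = (i + random.random()) / n  # one draw per stratum
        while cdf[j] < u:
            j += 1
        indices.append(j)
    return indices
```

The returned indices are non-decreasing, and particles with negligible weight are dropped with lower variance than under multinomial resampling.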

Sheinson, Daniel M; Niemi, Jarad; Meiring, Wendy

2014-09-01

191

Binary particle swarm optimization for operon prediction

An operon is a fundamental unit of transcription and contains specific functional genes for the construction and regulation of networks at the entire genome level. The correct prediction of operons is vital for understanding gene regulations and functions in newly sequenced genomes. As experimental methods for operon detection tend to be nontrivial and time consuming, various methods for operon prediction have been proposed in the literature. In this study, a binary particle swarm optimization is used for operon prediction in bacterial genomes. The intergenic distance, participation in the same metabolic pathway, the cluster of orthologous groups, the gene length ratio and the operon length are used to design a fitness function. We trained the proper values on the Escherichia coli genome, and used the above five properties to implement feature selection. Finally, our study used the intergenic distance, metabolic pathway and the gene length ratio property to predict operons. Experimental results show that the prediction accuracy of this method reached 92.1%, 93.3% and 95.9% on the Bacillus subtilis genome, the Pseudomonas aeruginosa PA01 genome and the Staphylococcus aureus genome, respectively. This method has enabled us to predict operons with high accuracy for these three genomes, for which only limited data on the properties of the operon structure exists. PMID:20385582
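
In binary PSO, each continuous velocity component is mapped through a sigmoid to the probability that the corresponding bit (e.g., a candidate operon boundary or a selected feature) is set to 1. A per-bit sketch under standard BPSO conventions (the parameter values are illustrative, not taken from this paper):

```python
import math
import random

def bpso_update_bit(v, x, pbest_bit, gbest_bit, w=1.0, c1=2.0, c2=2.0, vmax=4.0):
    """Update one velocity component and resample the corresponding bit.
    The sigmoid of the velocity is the probability that the bit becomes 1."""
    v = (w * v
         + c1 * random.random() * (pbest_bit - x)
         + c2 * random.random() * (gbest_bit - x))
    v = max(-vmax, min(vmax, v))           # clamp to keep the sigmoid responsive
    prob_one = 1.0 / (1.0 + math.exp(-v))  # sigmoid transfer function
    x = 1 if random.random() < prob_one else 0
    return v, x
```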

Chuang, Li-Yeh; Tsai, Jui-Hung; Yang, Cheng-Hong

2010-01-01

192

Binary particle swarm optimization for operon prediction.

An operon is a fundamental unit of transcription and contains specific functional genes for the construction and regulation of networks at the entire genome level. The correct prediction of operons is vital for understanding gene regulations and functions in newly sequenced genomes. As experimental methods for operon detection tend to be nontrivial and time consuming, various methods for operon prediction have been proposed in the literature. In this study, a binary particle swarm optimization is used for operon prediction in bacterial genomes. The intergenic distance, participation in the same metabolic pathway, the cluster of orthologous groups, the gene length ratio and the operon length are used to design a fitness function. We trained the proper values on the Escherichia coli genome, and used the above five properties to implement feature selection. Finally, our study used the intergenic distance, metabolic pathway and the gene length ratio property to predict operons. Experimental results show that the prediction accuracy of this method reached 92.1%, 93.3% and 95.9% on the Bacillus subtilis genome, the Pseudomonas aeruginosa PA01 genome and the Staphylococcus aureus genome, respectively. This method has enabled us to predict operons with high accuracy for these three genomes, for which only limited data on the properties of the operon structure exists. PMID:20385582

Chuang, Li-Yeh; Tsai, Jui-Hung; Yang, Cheng-Hong

2010-07-01

193

Modified particle swarm optimized MIMO FLC for complex industrial process

This paper presents a modified particle swarm optimization (MPSO) algorithm to design an optimal multi-input multi-output (MIMO) fuzzy logic controller for a cement mill process. The membership functions, rule base and scaling factors of the MIMO FLC are tuned for optimal control performance using MPSO by minimizing the integral absolute error for minimum and

P. Subbaraj; P. S. Godwin Anand

2010-01-01

194

Particle filtering for tracking of GLUT4 vesicles in TIRF microscopy

NASA Astrophysics Data System (ADS)

GLUT4 is responsible for insulin-stimulated glucose uptake into fat cells, and a description of its dynamic behavior can give insight into some working mechanisms and structures of these cells. Quantitative analysis of this dynamical process requires tracking hundreds of GLUT4 vesicles, which appear as bright spots in noisy image sequences. In this paper, a 3D tracking algorithm built in a Bayesian probabilistic framework is put forward, combined with the unique features of TIRF microscopy. A brightness-correction procedure is first applied to ensure that the intensity of a vesicle is constant over time and affected only by spatial factors. Tracking is then formalized as a state estimation problem, and a particle filter augmented with a sub-optimizer that steers the particles towards high-likelihood regions is used. Once each tracked vesicle is located in the image plane, its depth can be inferred indirectly from the exponential relationship between its intensity and its vertical position. The experimental results indicate that the vesicles are tracked well under different motion styles. Moreover, the algorithm provides the depth information of the tracked vesicle.
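
The exponential intensity-depth relationship exploited in the last step comes from the evanescent TIRF excitation field, which decays as I(z) = I0·exp(-z/d) with penetration depth d. A sketch of the inversion (the function name and symbols are illustrative, not from the paper):

```python
import math

def tirf_depth(intensity, i0, d_penetration):
    """Invert the evanescent-field decay I(z) = I0 * exp(-z / d)
    to recover the axial position z of a tracked vesicle."""
    return d_penetration * math.log(i0 / intensity)
```

A brighter spot is closer to the coverslip (smaller z), which is why the brightness-correction step matters: any intensity change not caused by depth would bias the inferred z.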

Wu, Xiangping; Liu, Xiaofang; Xu, Wenglong; Yan, Dandan; Chen, Yongli

2009-10-01

195

Synthesis and analysis of an optimal filtering algorithm for discrete signals with anomalous noise

NASA Astrophysics Data System (ADS)

The paper examines the synthesis of a nonbiased filter which is optimal in the rms sense in the class of linear filters. The synthesis is carried out for the case of the reception of a multidimensional Gaussian Markovian signal on a background of a mixture of constant Gaussian noise and anomalous noise with a partial known a priori description. The synthesized filter is shown to be invariant to the matrix of anomalous-noise intensity. The optimality of a procedure for the exclusion of anomalous observations is demonstrated for a particular case.

Demin, N. S.; Zhadan, L. I.

1984-02-01

196

NASA Astrophysics Data System (ADS)

A new technique for reliably identifying point sources in millimeter/submillimeter wavelength maps is presented. This method accounts for the frequency dependence of noise in the Fourier domain as well as nonuniformities in the coverage of a field. This optimal filter is an improvement over commonly-used matched filters that ignore coverage gradients. Treating noise variations in the Fourier domain as well as map space is traditionally viewed as a computationally intensive problem. We show that the penalty incurred in terms of computing time is quite small due to casting many of the calculations in terms of FFTs and exploiting the absence of sharp features in the noise spectra of observations. Practical aspects of implementing the optimal filter are presented in the context of data from the AzTEC bolometer camera. The advantages of using the new filter over the standard matched filter are also addressed in terms of a typical AzTEC map.

Perera, T. A.; Wilson, G. W.; Scott, K. S.; Austermann, J. E.; Schaar, J. R.; Mancera, A.

2013-07-01

197

EFFICIENT PARTICLE-PAIR FILTERING FOR ACCELERATION OF MOLECULAR DYNAMICS SIMULATION

The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing centers on one dominant computation: determining the short-range force between particle pairs. In particular, we present the first FPGA study

Herbordt, Martin

198

AIR FILTER PARTICLE-SIZE EFFICIENCY TESTING FOR DIAMETERS GREATER THAN 1 µm

The paper discusses tests of air filter particle-size efficiency for diameters greater than 1 micrometer. Evaluation of air cleaner efficiencies in this size range can be quite demanding, depending on the required accuracy. Such particles have sufficient mass to require considerati...

199

Rao-Blackwellised Particle Filtering for Fault Diagnosis

We tackle the fault diagnosis problem using conditionally Gaussian state space models and an efficient Monte Carlo method known as Rao-Blackwellised particle filtering. In this setting, there is one different linear-Gaussian state space model for each possible discrete state of operation. The task of diagnosis is to identify the discrete state of operation using the continuous measurements

Nando de Freitas

2001-01-01

200

Terrain Aided Underwater Navigation Using Point Mass and Particle Filters

This paper focuses on obtaining submerged position fixes for underwater vehicles by comparing bathymetric measurements with a bathymetric map. Our algorithms are tested on real data, collected by a HUGIN AUV equipped with a multibeam echo sounder (MBE). Due to our strongly non-linear and non-Gaussian problem, local linearization methods, such as the extended Kalman filter (EKF), have proven unsuitable

Kjetil Bergh; Oddvar Hallingstad

201

Optimal Filter Estimation for Lucas-Kanade Optical Flow

Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In the case of gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied initially to the original input images, and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we have concluded on the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we have found that our selection method offers a better performance for the Lucas-Kanade optical flow algorithm.
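
A normalized 1-D Gaussian kernel applied separably is the usual way to implement the pre-smoothing step discussed above. A sketch in pure Python (the 3-sigma truncation radius is a common convention, not a choice reported in the paper):

```python
import math

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to sum to 1, for separable
    pre-filtering before computing image gradients."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # common 3-sigma truncation
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth_rows(image, kernel):
    """Convolve each row with the kernel (replicate-pad at the borders);
    applying it again to the columns completes the separable 2-D smoothing."""
    r = len(kernel) // 2
    out = []
    for row in image:
        padded = [row[0]] * r + row + [row[-1]] * r
        out.append([sum(w * padded[i + j] for j, w in enumerate(kernel))
                    for i in range(len(row))])
    return out
```

The paper's contribution is the data-driven choice of `sigma` from image intensity; the kernel construction itself is standard.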

Sharmin, Nusrat; Brad, Remus

2012-01-01

202

Optimized interpolation filters for compatible pyramidal coding of TV and HDTV

NASA Astrophysics Data System (ADS)

This paper deals with the question of optimizing the filters in the upsampling stage of a TV/HDTV compatible pyramidal coder. From a coding gain point of view, both the decimation and upsampling filters should be optimized. In the context of compatible coding, not only the coding efficiency influences the choice of the decimation filter but also the compatible image quality. Therefore, assuming this filter has been fixed, we analyze the question of optimizing the upsampling filter in order to obtain the highest coding gain. This question is addressed for a mean squared error (MSE) criterion. In addition, assuming the base layer (TV) signal can be quantized, the influence of the quantization noise on the optimal interpolation filter is investigated and the problem is handled for the MSE criterion. As the statistical properties of pictures are required in the optimization, a model is then developed to compute these properties when there is motion. The model takes into account the processing of progressive sources and, concerning interlaced sequences, the independent processing of fields or the processing of merged fields. Results are then derived for the three types of processing.

Cuvelier, Laurent; Macq, Benoit M. M.; Maison, Benoit; Vandendorpe, Luc

1993-10-01

203

In this paper, various novel heuristic stochastic search techniques have been proposed for the optimization of proportional-integral-derivative gains used in Sugeno fuzzy logic based automatic generation control of multi-area thermal generating plants. The techniques are classical particle swarm optimization, hybrid particle swarm optimizations and hybrid genetic algorithm-simulated annealing. Numerical results show that all optimization techniques are more or less equally

S. P. Ghoshal

2004-01-01

204

NASA Astrophysics Data System (ADS)

Correlation filters for object recognition represent an attractive alternative to feature based methods. These filters are usually synthesized as a combination of several training templates. These templates are commonly chosen in an ad-hoc manner by the designer, therefore, there is no guarantee that the best set of templates is chosen. In this work, we propose a new approach for the design of composite correlation filters using a multi-objective evolutionary algorithm in conjunction with a variable length coding technique. Given a vast search space of feasible templates, the algorithm finds a subset that allows the construction of a filter with an optimized performance in terms of several performance metrics. The resultant filter is capable of recognizing geometrically distorted versions of a target in high cluttering and noisy conditions. Computer simulation results obtained with the proposed approach are presented and discussed in terms of several performance metrics. These results are also compared to those obtained with existing correlation filters.

Serrano Trujillo, Alejandra; Díaz Ramírez, Víctor H.; Trujillo, Leonardo

2013-09-01

205

A note on optimal filtering in the presence of unknown biases

NASA Technical Reports Server (NTRS)

This note considers some aspects of the optimal filtering problem for linear processes in the presence of unknown biases in the input and the observations. It is proved via duality that the optimal filtering problem in the presence of an input bias is equivalent to a certain optimal regulator problem incorporating integral feedback. The question of observability of the augmented system used in the state and bias estimation is answered by deriving necessary and sufficient conditions when bias is present (1) in the input, (2) in the observations and (3) both in the input and the observations.

Joshi, S. M.

1975-01-01

206

Parallel sorting algorithms for optimizing particle simulations

Real world particle simulation codes have to handle a huge number of particles and their interactions. Thus, parallel implementations are required to get suitable production codes. Parallel sorting is often used to organize the set of particles or to redistribute data for locality and load balancing concerns. In this article, the use and design of parallel sorting algorithms for parallel

Michael Hofmann; G. Runger; P. Gibbon; R. Speck

2010-01-01

207

NASAL FILTERING OF FINE PARTICLES IN CHILDREN VS. ADULTS

Nasal efficiency for removing fine particles may be affected by developmental changes in nasal structure associated with age. In healthy Caucasian children (age 6-13, n=17) and adults (age 18-28, n=11) we measured the fractional deposition (DF) of fine particles (1 and 2 µm MMAD)...

208

The aim of this paper is to demonstrate the potential power of large-scale particle filtering for parameter estimation of in silico biological pathways where time course measurements of biochemical reactions are observable. Particle filtering has been a popular technique in statistical science, approximating posterior distributions of the model parameters of a dynamic system using sequentially generated Monte Carlo samples. In order to apply particle filtering to system identification of biological pathways, it is often necessary to explore posterior distributions defined over an exceedingly high-dimensional parameter space. It is then essential to use a fairly large number of Monte Carlo samples to obtain an approximation with a high degree of accuracy. In this paper, we address some implementation issues of large-scale particle filtering, and then indicate the importance of large-scale computing for parameter learning of in silico biological pathways. We have tested the ability of particle filtering with 10^8 Monte Carlo samples on the transcription circuit of the circadian clock, which contains 45 unknown kinetic parameters. The proposed approach could clearly reveal the shape of the posterior distributions over the 45-dimensional parameter space. PMID:19209704

Nakamura, Kazuyuki; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru; Higuchi, Tomoyuki

2009-01-01

209

This thesis solves the problem of finding the optimal linear noise-reduction filter for linear tomographic image reconstruction. The optimization is data dependent and results in minimizing the mean-square error of the reconstructed image. The error is defined as the difference between the result and the best possible reconstruction. Applications for the optimal filter include reconstructions of positron emission tomographic (PET), X-ray computed tomographic, single-photon emission tomographic, and nuclear magnetic resonance imaging. Using high resolution PET as an example, the optimal filter is derived and presented for the convolution backprojection, Moore-Penrose pseudoinverse, and the natural-pixel basis set reconstruction methods. Simulations and experimental results are presented for the convolution backprojection method.

Sun, W.Y. [Lawrence Berkeley Lab., CA (United States); California Univ., Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences]

1993-04-01

210

PARTICLE FILTER WITH EFFICIENT IMPORTANCE SAMPLING AND MODE TRACKING (PF-EIS-MT): a practically implementable particle filtering (PF) method called "PF-EIS-MT" for tracking on large-dimensional state spaces, where direct application of PF requires an impractically large number of particles. PF-EIS

Vaswani, Namrata

211

An alternative to the well-established Fourier transform infrared (FT-IR) spectrometry, termed discrete frequency infrared (DFIR) spectrometry, has recently been proposed. This approach uses narrowband mid-infrared reflectance filters based on guided-mode resonance (GMR) in waveguide gratings, but filters designed and fabricated have not attained the spectral selectivity (~32 cm(-1)) commonly employed for measurements of condensed matter using FT-IR spectroscopy. With the incorporation of dispersion and optical absorption of materials, we present here optimal design of double-layer surface-relief silicon nitride-based GMR filters in the mid-IR for various narrow bandwidths below 32 cm(-1). Both shift of the filter resonance wavelengths arising from the dispersion effect and reduction of peak reflection efficiency and electric field enhancement due to the absorption effect show that the optical characteristics of materials must be taken into consideration rigorously for accurate design of narrowband GMR filters. By incorporating considerations for background reflections, the optimally designed GMR filters can have bandwidth narrower than the designed filter by the antireflection equivalence method based on the same index modulation magnitude, without sacrificing low sideband reflections near resonance. The reported work will enable use of GMR filters-based instrumentation for common measurements of condensed matter, including tissues and polymer samples. PMID:22109445

Liu, Jui-Nung; Schulmerich, Matthew V; Bhargava, Rohit; Cunningham, Brian T

2011-11-21

212

A Study on Smoothing for Particle-Filtered 3D Human Body Tracking

Stochastic models have become the dominant means of approaching the problem of articulated 3D human body tracking, where approximate inference is employed to tractably estimate the high-dimensional (~30D) posture space. Of these approximate inference techniques, particle filtering is the most commonly used approach. However, filtering only takes into account past observations; almost no body tracking research employs smoothing to improve the

Patrick Peursum; Svetha Venkatesh; Geoff West

2010-01-01

213

Fiber Bragg grating filter using evaporated induced self assembly of silica nano particles

NASA Astrophysics Data System (ADS)

In the present work we conduct a study of fiber filters produced by evaporation of silica particles onto a MM-fiber core. A band filter was designed and theoretically verified using a 2D Comsol simulation model of the 3D problem, calculated in the frequency domain with respect to refractive index. The fiber filters were fabricated by stripping and chemically etching the middle part of an MM-fiber until the core was exposed. A monolayer of silica nanoparticles was deposited on the core using an Evaporation Induced Self-Assembly (EISA) method. The experimental results indicated a broader bandwidth than the simulations predicted, which can be explained by the mismatch in the particle size distributions, uneven particle packing and, finally, effects from multiple mode angles. Thus, there are several closely connected Bragg wavelengths that build up the broader bandwidth. The experimental part shows that, by narrowing the particle size distribution and better controlling the particle packing, the filter effectiveness can be greatly improved.
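
The "several closely connected Bragg wavelengths" follow from the first-order Bragg condition lambda_B = 2·n_eff·Lambda: a spread in grating period Lambda, set here by the particle size distribution, maps linearly to a spread of reflected wavelengths. A sketch with illustrative numbers (the effective index and the period range are assumptions, not values from the paper):

```python
def bragg_wavelength_nm(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period_nm

# Illustrative only: a 150-200 nm spread in grating period at an assumed
# effective index of 1.30 yields a band of closely spaced Bragg wavelengths,
# i.e., a broadened filter bandwidth.
band = [bragg_wavelength_nm(1.30, p) for p in (150.0, 200.0)]
```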

Hammarling, Krister; Zhang, Renyung; Manuilskiy, Anatoliy; Nilsson, Hans-Erik

2014-03-01

214

Sparsity Optimization in Design of Multidimensional Filter Networks

They are composed of sparse sub-filters whose high sparsity ensures ... increases the approximation error and degrades the image quality. ... contrast to L = 1, in the case of L > 1 there is practically no chance to find an exact minimizer of f(x) ... Candès E, Romberg J (2005) l1-MAGIC: Recovery of sparse signals via...

2014-11-22

215

The report describes analysis of three data sets to evaluate the extent of mass loss on Teflon filters due to ammonium nitrate volatilization. The effect on measured mass is site-dependent, and depends on the meteorological conditions and the fraction of PM-10 mass that consists of ammonium nitrate particles. The highest mass loss found in the California Acid Deposition Monitoring Program network occurred during summer daytime in southern California, amounting to 30-50% of the gravimetric mass. The biased mass measurement implies that the Federal Reference Method sampler for fine particles may lead to control strategies that are biased toward sources of fugitive dust and other primary particle emission sources. This analysis also has implications for the speciation monitoring methods being considered by the EPA. Samples must be collected on nylon filters for nitrate analysis, and on Teflon and quartz filters for analysis of mass, elements, and carbon.

Ashbaugh, L.L.; Eldred, R.A.

1998-09-01

216

Terrain Aided Underwater Navigation Using Point Mass and Particle Filters

This paper focuses on obtaining submerged position fixes for underwater vehicles by comparing bathymetric measurements with a bathymetric map. Our algorithms are tested on real data, collected by a HUGIN AUV equipped with a multibeam echo sounder (MBE). Due to our strongly non-linear and non-Gaussian problem, local linearization methods, such as the extended Kalman filter (EKF), have proven unsuitable in many terrain types. We therefore focus on two different recursive

Kjetil Bergh Anonsen; Oddvar Hallingstad

2006-01-01

217

NASA Astrophysics Data System (ADS)

Sequential Monte Carlo (SMC) approaches are increasingly being used in watershed hydrology to approximate the evolving posterior distribution of model parameters and states as new streamflow or other data become available. The typical implementation of SMC uses a set of particles to represent the posterior probability density function (pdf) of model parameters and states. These particles are propagated forward in time and/or space using the (nonlinear) model operator and updated when new observational data become available. The main difficulty in applying particle filters in practice is ensemble degeneracy, in which an increasing number of particles explores unproductive parts of the posterior pdf and is assigned a negligible weight. To ensure sufficient particle diversity at every stage during the simulation, I will present an efficient SMC scheme that combines particle filtering with importance resampling and DiffeRential Evolution Adaptive Metropolis (DREAM) sampling. Our method is based on the DREAM adaptive MCMC scheme presented in Vrugt et al. (2009), but implemented sequentially to facilitate posterior tracking of model parameters and states. Initial results using the Sacramento Soil Moisture Accounting (SAC-SMA) model have shown that our DREAM particle filter has the advantage of requiring far fewer particles than conventional SMC approaches. This significantly speeds up convergence to the evolving limiting distribution, and allows parameter and state inference in spatially distributed hydrologic models.

Vrugt, J. A.

2009-04-01

218

Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder for assigned current position and speed. Since assessed methodologies can cause some loss of accuracy due to nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in case of nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
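
The Distance at Closest Point of Approach used above as the collision-risk metric has a closed form for constant-velocity motion: with relative position r and relative velocity v, the minimizing time is t* = max(0, -(r·v)/(v·v)) and the distance is |r + t*·v|. A generic geometric sketch (not the authors' implementation, which estimates the state with a particle filter first):

```python
import math

def cpa(p_own, v_own, p_intr, v_intr):
    """Time and distance at the Closest Point of Approach for two
    constant-velocity trajectories (works in any dimension)."""
    r = [pi - po for pi, po in zip(p_intr, p_own)]   # relative position
    v = [vi - vo for vi, vo in zip(v_intr, v_own)]   # relative velocity
    vv = sum(c * c for c in v)
    # minimizer of |r + t v|^2, clipped to the future (t >= 0)
    t = 0.0 if vv == 0.0 else max(0.0, -sum(a * b for a, b in zip(r, v)) / vv)
    closest = [a + t * b for a, b in zip(r, v)]
    return t, math.sqrt(sum(c * c for c in closest))
```

Because this distance depends nonlinearly on the estimated positions and velocities, more accurate state estimates from the particle filter translate directly into earlier, more reliable collision detection.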

Moccia, Antonio

2014-01-01

219

We present the synthesis of multi-channel fiber Bragg grating (MCFBG) filters for dense wavelength-division-multiplexing (DWDM) application by using a simple optimization approach based on a Lagrange multiplier optimization (LMO) method. We demonstrate for the first time that the LMO method can be used to constrain various parameters of the designed MCFBG filters for practical application demands and fabrication requirements. The designed filters have a number of merits, i.e., flat-top and low dispersion spectral response as well as single stage. Above all, the maximum amplitude of the index modulation profiles of the designed MCFBGs can be substantially reduced under the applied constrained condition. The simulation results demonstrate that the LMO algorithm can provide a potential alternative for complex fiber grating filter design problems. PMID:19529515

Lee, Cheng-Ling; Lee, Ray-Kuang; Kao, Yee-Mou

2006-11-13

220

NASA Astrophysics Data System (ADS)

We present the synthesis of multi-channel fiber Bragg grating (MCFBG) filters for dense wavelength-division-multiplexing (DWDM) application by using a simple optimization approach based on a Lagrange multiplier optimization (LMO) method. We demonstrate for the first time that the LMO method can be used to constrain various parameters of the designed MCFBG filters for practical application demands and fabrication requirements. The designed filters have a number of merits, i.e., flat-top and low dispersion spectral response as well as single stage. Above all, the maximum amplitude of the index modulation profiles of the designed MCFBGs can be substantially reduced under the applied constrained condition. The simulation results demonstrate that the LMO algorithm can provide a potential alternative for complex fiber grating filter design problems.

Lee, Cheng-Ling; Lee, Ray-Kuang; Kao, Yee-Mou

2006-11-01
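The Lagrange multiplier machinery behind LMO reduces, for equality constraints, to solving a stationarity system. A toy sketch with a quadratic objective (illustrative only; the actual grating synthesis constrains quantities such as the index-modulation amplitude, not this made-up function):

```python
import numpy as np

# Minimize f(x1, x2) = x1^2 + x2^2 subject to g(x1, x2) = x1 + x2 = 1.
# Stationarity of the Lagrangian L = f + lam*(g - 1) gives a linear KKT system.
kkt = np.array([[2.0, 0.0, 1.0],    # dL/dx1 = 2*x1 + lam = 0
                [0.0, 2.0, 1.0],    # dL/dx2 = 2*x2 + lam = 0
                [1.0, 1.0, 0.0]])   # constraint row: x1 + x2 = 1
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(kkt, rhs)   # solution (0.5, 0.5), lam = -1
```

For nonlinear design problems like the MCFBG synthesis, the same stationarity conditions are enforced iteratively rather than in one linear solve.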

221

NASA Astrophysics Data System (ADS)

Facial recognition is a difficult task due to variations in pose and facial expressions, as well as presence of noise and clutter in captured face images. In this work, we address facial recognition by means of composite correlation filters designed with multi-objective combinatorial optimization. Given a large set of available face images having variations in pose, gesticulations, and global illumination, a proposed algorithm synthesizes composite correlation filters by optimization of several performance criteria. The resultant filters are able to reliably detect and correctly classify face images of different subjects even when they are corrupted with additive noise and nonhomogeneous illumination. Computer simulation results obtained with the proposed approach are presented and discussed in terms of efficiency in face detection and reliability of facial classification. These results are also compared with those obtained with existing composite filters.

Cuevas, Andres; Diaz-Ramirez, Victor H.; Kober, Vitaly; Trujillo, Leonardo

2014-09-01

222

Evaluation of filter media for particle number, surface area and mass penetrations.

The National Institute for Occupational Safety and Health (NIOSH) developed a standard for respirator certification under 42 CFR Part 84, using a TSI 8130 automated filter tester with photometers. A recent study showed that photometric detection methods may not be sensitive enough for measuring engineered nanoparticles. Present NIOSH standards for penetration measurement are mass-based; however, the threshold limit value/permissible exposure limit for engineered nanoparticle worker exposure is not yet clear. There is a lack of standardized filter tests for engineered nanoparticles, and development of a simple nanoparticle filter test is indicated. To better understand filter performance against engineered nanoparticles and the correlations among different tests, initial penetration levels of one fiberglass and two electret filter media were measured using a series of polydisperse and monodisperse aerosol test methods at two different laboratories (University of Minnesota Particle Technology Laboratory and 3M Company). Monodisperse aerosol penetrations were measured by a TSI 8160 using NaCl particles from 20 to 300 nm. Particle penetration curves and overall penetrations were measured by scanning mobility particle sizer (SMPS), condensation particle counter (CPC), nanoparticle surface area monitor (NSAM), and TSI 8130 at two face velocities and three layer thicknesses. Results showed that reproducible, comparable filtration data were achieved between the two laboratories, with proper control of test conditions and calibration procedures. For particle penetration curves, the experimental results of monodisperse testing agreed well with polydisperse SMPS measurements. The most penetrating particle sizes (MPPSs) of the electret and fiberglass filter media were ~50 and 160 nm, respectively. For overall penetrations, the CPC and NSAM results of polydisperse aerosols were close to the penetration at the corresponding median particle sizes.
For each filter type, power-law correlations between the penetrations measured by different instruments show that the NIOSH TSI 8130 test may be used to predict penetrations at the MPPS as well as the CPC and NSAM results with polydisperse aerosols. It is recommended to use dry air (<20% RH) as makeup air in the test system to prevent sodium chloride particles from deliquescing and to minimize the challenge particle dielectric constant, and to use an adequate neutralizer to fully neutralize the polydisperse challenge aerosol. For a simple nanoparticle penetration test, it is recommended to use a polydisperse aerosol challenge with a geometric mean of ~50 nm with the CPC or the NSAM as detectors. PMID:22752097

Li, Lin; Zuo, Zhili; Japuntich, Daniel A; Pui, David Y H

2012-07-01

223

Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

NASA Technical Reports Server (NTRS)

The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.

Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

2002-01-01
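The core PSO update that such applications build on is compact. A generic, minimal sketch in plain Python (the paper's multidisciplinary variant additionally handles discrete variables and numerical noise, which this omits; all parameter values are conventional defaults, not the authors'):

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over a box with a basic particle swarm (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                 # each particle's best position
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            fi = f(x[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, x[i][:]
                if fi < gval:
                    gval, gbest = fi, x[i][:]
    return gbest, gval
```

Because the update uses only function values, never gradients, the noise and discreteness that defeat gradient-based optimizers do not break it.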

224

Optimizing the Choice of Filter Sets for Space Based Imaging Instruments

NASA Astrophysics Data System (ADS)

We investigate the challenge of selecting a limited number of filters for space based imaging instruments such that they are able to address multiple heterogeneous science goals. The number of available filter slots for a mission is bounded by factors such as instrument size and cost. We explore methods used to extract the optimal group of filters such that they complement each other most effectively. We focus on three approaches: maximizing the separation of objects in two-dimensional color planes, SED fitting to select those filter sets that give the finest resolution in fitted physical parameters, and maximizing the orthogonality of physical parameter vectors in N-dimensional color-color space. These techniques are applied to a test case, a UV/optical imager with space for five filters, with the goal of measuring the properties of local stars through to distant galaxies.

Elliott, Rachel E.; Farrah, Duncan; Petty, Sara M.; Harris, Kathryn Amy

2015-01-01
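The first criterion, maximizing object separation in color space, can be sketched as a brute-force subset search. The toy flux table and the max-min Euclidean figure of merit below are illustrative assumptions, not the authors' actual metrics:

```python
from itertools import combinations
import math

def best_filter_subset(fluxes, k):
    """Pick the k filter indices whose restriction best separates the objects.

    fluxes: {object_name: [flux in filter 0, filter 1, ...]}
    Criterion: maximize the minimum pairwise Euclidean distance between the
    objects' flux vectors restricted to the chosen filters. Exhaustive search
    is fine for the handful of slots available on a real instrument.
    """
    n_filters = len(next(iter(fluxes.values())))
    objs = list(fluxes.values())
    def min_sep(idx):
        return min(math.dist([a[i] for i in idx], [b[i] for i in idx])
                   for a, b in combinations(objs, 2))
    return max(combinations(range(n_filters), k), key=min_sep)
```

With five slots drawn from a few dozen candidate bandpasses, the search space stays in the millions of subsets, so exhaustive evaluation remains tractable.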

225

Design of Fractional Order Controllers Based on Particle Swarm Optimization

An intelligent optimization method for designing fractional order PID (FOPID) controllers based on particle swarm optimization (PSO) is presented in this paper. Fractional calculus can provide novel and higher performance extension for FOPID controllers. However, the difficulties of designing FOPID controllers increase, because FOPID controllers append derivative order and integral order in comparison with traditional PID controllers. To design the

Jun-yi Cao; Bing-gang Cao

2006-01-01

226

FPGA Implementation of Optimal Filtering Algorithm for TileCal ROD System

Traditionally, the Optimal Filtering algorithm has been implemented using general-purpose programmable DSP chips. Alternatively, new FPGAs provide a highly adaptable and flexible platform for developing this algorithm. TileCal ROD is a multi-channel system, where similar data arrive at very high sampling rates and are subject to simultaneous processing tasks. It includes different FPGAs with high I/O capacity and parallel structures that are beneficial for data analysis. The Optical Multiplexer Board is one of the elements present in the TileCal ROD system. It has FPGA devices that offer an ideal platform for implementing the Optimal Filtering algorithm. Currently, this algorithm runs on the DSPs included on the ROD motherboard. This work presents an alternative implementation of the Optimal Filtering algorithm.

Torres, J; Castillo, V; Cuenca, C; Ferrer, A; Fullana, E; González, V; Higón, E; Poveda, J; Ruiz-Martínez, A; Salvachúa, B; Sanchis, E; Solans, C; Valero, A; Valls, J A

2008-01-01
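Optimal Filtering reconstructs the signal amplitude as a fixed weighted sum of the digitized samples, which is what makes it well suited to both DSPs and FPGAs. A minimal sketch of how such weights can be derived (white-noise assumption and a made-up pulse shape; the production TileCal algorithm also reconstructs timing and uses the measured noise autocorrelation):

```python
import numpy as np

def of_weights(g):
    """Noise-minimizing linear weights a with a.g = 1 (unit gain on the pulse
    shape) and a.1 = 0 (pedestal rejection), assuming white noise.

    Minimize a^T a subject to the two linear constraints via a KKT system.
    """
    n = len(g)
    B = np.column_stack([g, np.ones(n)])            # constraint directions
    kkt = np.block([[2.0 * np.eye(n), B],
                    [B.T, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [1.0, 0.0]])  # targets: a.g=1, a.1=0
    return np.linalg.solve(kkt, rhs)[:n]

g = np.array([0.0, 0.3, 1.0, 0.7, 0.3])   # assumed pulse-shape samples
a = of_weights(g)
samples = 5.0 * g + 10.0                  # amplitude 5 on a pedestal of 10
amplitude = float(a @ samples)            # recovers 5, pedestal cancels
```

Because the reconstruction is just a dot product per channel, it maps naturally onto the parallel multiply-accumulate structures of an FPGA.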

227

Ares-I Bending Filter Design using a Constrained Optimization Approach

NASA Technical Reports Server (NTRS)

The Ares-I launch vehicle represents a challenging flex-body structural environment for control system design. Software filtering of the inertial sensor output is required to ensure adequate stable response to guidance commands while minimizing trajectory deviations. This paper presents a design methodology employing numerical optimization to develop the Ares-I bending filters. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares-I time-varying dynamics and control system can be frozen over a short period of time, the bending filters are designed to stabilize all the selected frozen-time launch control systems in the presence of parameter uncertainty. To ensure adequate response to guidance commands, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the bending filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The bending filter designs provided here have been demonstrated to provide stable first and second stage control systems in both the Draper Ares Stability Analysis Tool (ASAT) and the MSFC MAVERIC 6DOF nonlinear time domain simulation.

Hall, Charles; Jang, Jiann-Woei; Hall, Robert; Bedrossian, Nazareth

2008-01-01

228

Optimized superficially porous particles for protein separations.

Continuing interest in larger therapeutic molecules by pharmaceutical and biotech companies provides the need for improved tools for examining these molecules both during the discovery phase and later during quality control. To meet this need, larger-pore superficially porous particles with appropriate surface properties (Fused-Core® particles) have been developed with a pore size of 400 Å, allowing large molecules (<500 kDa) unrestricted access to the bonded phase. In addition, a particle size (3.4 µm) is employed that allows high-efficiency, low-pressure separations suitable for potentially pressure-sensitive proteins. A study of the shell thickness of the new Fused-Core particles suggests a compromise between a short diffusion path and high efficiency versus adequate retention and mass load tolerance. In addition, superior performance for the reversed-phase separation of proteins requires that specific design properties for the bonded phase be incorporated. As a result, columns of the new particles with unique bonded phases show excellent stability and high compatibility with mass spectrometry-suitable mobile phases. This report includes fast separations of intact protein mixtures, as well as examples of very high-resolution separations of larger monoclonal antibody materials and associated variants. Investigations of protein recovery, sample loading and dynamic range for analysis are shown. The advantages of these new 400 Å Fused-Core particles, specifically designed for protein analysis, over traditional particles for protein separations are demonstrated. PMID:24094750

Schuster, Stephanie A; Wagner, Brian M; Boyes, Barry E; Kirkland, Joseph J

2013-11-01

229

Particle Clogging in Filter Media of Embankment Dams: A Numerical and Experimental Study

NASA Astrophysics Data System (ADS)

The safety of dam structures requires the characterization of the granular filter ability to capture fine-soil particles and prevent erosion failure in the event of an interfacial dislocation. Granular filters are one of the most important protective design elements of large embankment dams. In case of cracking and erosion, if the filter is capable of retaining the eroded fine particles, then the crack will seal and the dam safety will be ensured. Here we develop and apply a numerical tool to thoroughly investigate the migration of fines in granular filters at the grain scale. The numerical code solves the incompressible Navier-Stokes equations and uses a Lagrange multiplier technique which enforces the correct in-domain computational boundary conditions inside and on the boundary of the particles. The numerical code is validated against experiments conducted at the US Army Engineer Research and Development Center (ERDC). These laboratory experiments on soil transport and trapping in granular media are performed in a constant-head flow chamber filled with the filter media. Numerical solutions are compared to experimentally measured flow rates, pressure changes and base particle distributions in the filter layer and show good qualitative and quantitative agreement. To further the understanding of soil transport in granular filters, we investigated the sensitivity of the particle clogging mechanism to various parameters, such as particle size ratio, the magnitude of the hydraulic gradient, particle concentration, and grain-to-grain contact properties. We found that for intermediate particle size ratios, the high flow rates and low friction lead to deeper intrusion (or erosion) depths. We also found that the damage tends to be shallower and less severe with decreasing flow rate, increasing friction and concentration of suspended particles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).

Antoun, T.; Kanarska, Y.; Ezzedine, S. M.; Lomov, I.; Glascoe, L. G.; Smith, J.; Hall, R. L.; Woodson, S. C.

2013-12-01

230

Tissue stiffness estimation plays an important role in cancer detection and treatment. The presence of stiffer regions in healthy tissue can be used as an indicator of possible pathological changes. Electrode vibration elastography involves tracking of a mechanical shear wave in tissue using radio-frequency ultrasound echoes. Based on appropriate assumptions on tissue elasticity, this approach provides a direct way of measuring tissue stiffness from shear wave velocity and enables visualization in the form of tissue stiffness maps. In this study, two algorithms for shear wave velocity reconstruction in an electrode vibration setup are presented. The first method models the wave arrival time data using a hidden Markov model whose hidden states are local wave velocities that are estimated using a particle filter implementation. This is compared to a direct optimization-based function fitting approach that uses sequential quadratic programming to estimate the unknown velocities and locations of interfaces. The mean shear wave velocities obtained using the two algorithms are within 10% of each other. Moreover, the Young's modulus estimates obtained from an incompressibility assumption are within 15 kPa of the true stiffness values obtained from mechanical testing. Based on visual inspection of the two filtering algorithms, the particle filtering method produces smoother velocity maps. PMID:25285187

Ingle, Atul; Varghese, Tomy

2014-01-01
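For a single constant-velocity region, the function-fitting alternative reduces to a least-squares fit of arrival time versus position. A toy sketch (the paper's SQP formulation also estimates interface locations between regions, which is omitted here; units and values are made up):

```python
def shear_wave_velocity(x, t):
    """Least-squares fit of arrival times t = t0 + x / v over a region of
    constant shear wave velocity; the slope of t vs. x is 1/v.

    x: positions along the wave path [m], t: arrival times [s].
    """
    n = len(x)
    mx = sum(x) / n
    mt = sum(t) / n
    slope = (sum((xi - mx) * (ti - mt) for xi, ti in zip(x, t))
             / sum((xi - mx) ** 2 for xi in x))
    # Under tissue incompressibility, Young's modulus follows as
    # E ~ 3 * rho * v**2, which is how velocity maps become stiffness maps.
    return 1.0 / slope
```

The particle filter approach in the abstract instead treats the local velocity as a hidden state that evolves along the path, which is what yields the smoother maps.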

231

Optimal-adaptive filters for modelling spectral shape, site amplification, and source scaling

This paper introduces some applications of optimal filtering techniques to earthquake engineering by using the so-called ARMAX models. Three applications are presented: (a) spectral modelling of ground accelerations, (b) site amplification (i.e., the relationship between two records obtained at different sites during an earthquake), and (c) source scaling (i.e., the relationship between two records obtained at a site during two different earthquakes). A numerical example for each application is presented by using recorded ground motions. The results show that the optimal filtering techniques provide elegant solutions to the above problems, and can be a useful tool in earthquake engineering.

Safak, Erdal

1989-01-01
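ARMAX-style models like those used here can be illustrated by fitting the simpler ARX special case with ordinary least squares (the moving-average noise term of a full ARMAX model requires iterative estimation and is omitted; model orders and data below are made up for illustration):

```python
import numpy as np

def fit_arx(y, x, na=2, nb=1):
    """Least-squares fit of an ARX model:
    y[t] = a_1*y[t-1] + ... + a_na*y[t-na] + b_0*x[t] + ... + b_{nb-1}*x[t-nb+1]

    Returns the stacked coefficients [a_1..a_na, b_0..b_{nb-1}].
    """
    rows, targets = [], []
    for t in range(na, len(y)):
        rows.append([y[t - i] for i in range(1, na + 1)] +
                    [x[t - j] for j in range(nb)])
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta
```

In the paper's applications, x and y would be, for example, rock-site and soil-site acceleration records, and the fitted transfer function characterizes the site amplification.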

232

Electret filters are composed of permanently charged electret fibers and are widely used in applications requiring high collection efficiency and low-pressure drop. We tested electret filter media used in manufacturing cabin air filters by applying two different charging states to the test particles. These charging states were achieved by spray electrification through the atomization process and by bipolar ionization with

J. H. Ji; G. N. Bae; S. H. Kang; J. Hwang

2003-01-01

233

Auto-Clustering Using Particle Swarm Optimization and Bacterial Foraging

NASA Astrophysics Data System (ADS)

This paper presents a hybrid approach for clustering based on particle swarm optimization (PSO) and bacteria foraging algorithms (BFA). The new method AutoCPB (Auto-Clustering based on particle bacterial foraging) makes use of autonomous agents whose primary objective is to cluster chunks of data by using simplistic collaboration. Inspired by the advances in clustering using particle swarm optimization, we suggest further improvements. Moreover, we gathered standard benchmark datasets and compared our new approach against the standard K-means algorithm, obtaining promising results. Our hybrid mechanism outperforms earlier PSO-based approaches by using simplistic communication between agents.

Olesen, Jakob R.; Cordero H., Jorge; Zeng, Yifeng

234

NSDL National Science Digital Library

All About Circuits is a website that "provides a series of online textbooks covering electricity and electronics." Written by Tony R. Kuphaldt, the textbooks available here are wonderful resources for students, teachers, and anyone who is interested in learning more about electronics. This specific section, Filters, is the eighth chapter in Volume II, "Alternating Current (AC)". A few of the topics covered in this chapter include: Low-pass filters, High-pass filters, Band-pass filters, Band-stop filters, and Resonant filters. Diagrams and detailed descriptions of concepts are included throughout the chapter to provide users with a comprehensive lesson. Visitors to the site are also encouraged to discuss concepts and topics using the All About Circuits discussion forums (registration with the site is required to post materials).

Kuphaldt, Tony R.

2008-07-02

235

An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the Non-Local (NL) means filter [1]. The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2D images, but reducing the computational burden is a critical aspect of extending the method to 3D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: (a) an automatic tuning of the smoothing parameter, (b) a selection of the most relevant voxels, (c) a blockwise implementation, and (d) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb [2]. The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion [3] and Total Variation minimization [4]), in terms of accuracy (measured by the Peak Signal to Noise Ratio) with low computation time. Finally, qualitative results on real data are presented. PMID:18390341

Coupé, Pierrick; Yger, Pierre; Prima, Sylvain; Hellier, Pierre; Kervrann, Charles; Barillot, Christian

2008-01-01
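The NL-means principle, averaging samples whose neighborhoods look alike, can be shown on a 1-D signal. This naive sketch deliberately ignores all of the paper's optimizations (blockwise processing, voxel preselection, automatic tuning of the smoothing parameter, parallelism); parameter values are illustrative:

```python
import math

def nl_means_1d(signal, patch=1, search=5, h=0.3):
    """Naive NL-means on a 1-D signal: each sample is replaced by a weighted
    average of nearby samples, weighted by how similar their surrounding
    patches are (Gaussian kernel on the patch distance, bandwidth h)."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = 0.0
            for k in range(-patch, patch + 1):   # squared patch distance,
                a = signal[min(max(i + k, 0), n - 1)]  # edges clamped
                b = signal[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

The cost of the naive form scales with (search window) x (patch size) per sample, which is exactly why the 3D extension in the paper needs the blockwise and preselection optimizations.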

236

Optimization of magnetic switches for single particle and cell transport

The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

Abedini-Nassab, Roozbeh; Yellen, Benjamin B., E-mail: yellen@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, North Carolina 27708 (United States); Joint Institute, University of Michigan-Shanghai Jiao Tong University, Shanghai Jiao Tong University, Shanghai 200240 (China)]; Murdoch, David M. [Department of Medicine, Duke University, Durham, North Carolina 27708 (United States)]; Kim, CheolGi [Department of Emerging Materials Science, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of)]

2014-06-28

237

The conventional microfluidic H filter is modified with multi-insulating blocks to achieve a flow-through manipulation and separation of microparticles. The device transports particles by exploiting electro-osmosis and electrophoresis, and manipulates particles by utilizing dielectrophoresis (DEP). Polydimethylsiloxane (PDMS) blocks fabricated in the main channel of the PDMS H filter induce a nonuniform electric field, which exerts a negative DEP force on the particles. The use of multi-insulating blocks not only enhances the DEP force generated, but it also increases the controllability of the motion of the particles, facilitating their manipulation and separation. Experiments were conducted to demonstrate the controlled flow direction of particles by adjusting the applied voltages and the separation of particles by size under two different input conditions, namely (i) a dc electric field mode and (ii) a combined ac and dc field mode. Numerical simulations elucidate the electrokinetic and hydrodynamic forces acting on a particle, with theoretically predicted particle trajectories in good agreement with those observed experimentally. In addition, the flow field was obtained experimentally with fluorescent tracer particles using the microparticle image velocimetry (µPIV) technique. PMID:19693372

Lewpiriyawong, Nuttawut; Yang, Chun; Lam, Yee Cheong

2008-01-01

238

Ceramic barrier filtration is a leading technology employed in hot gas filtration. Hot gases loaded with ash particles flow through the ceramic candle filters and deposit ash on their outer surface. The deposited ash is periodically removed using a back pulse cleaning jet, a process known as surface regeneration. The cleaning done by this technique still leaves some residual ash on the filter surface, which over a period of time sinters, forms a solid cake and leads to mechanical failure of the candle filter. A room temperature testing facility (RTTF) was built to gain more insight into the surface regeneration process before testing commenced at high temperature. The RTTF was instrumented to obtain pressure histories during the surface regeneration process, and a high-resolution high-speed imaging system was integrated in order to obtain pictures of the surface regeneration process. The objective of this research has been to utilize the RTTF to study the surface regeneration process at the convenience of room temperature conditions. The face velocity of the fluidized gas, the regeneration pressure of the back pulse and the time to build up ash on the surface of the candle filter were identified as the important parameters to be studied. Two types of ceramic candle filters were used in the study. Each candle filter was subjected to several cycles of ash build-up followed by a thorough study of the surface regeneration process at different parametric conditions. The pressure histories in the chamber and filter system during build-up and regeneration were then analyzed. The size distribution and movement of the ash particles during the surface regeneration process were studied. The effect of each parameter on the performance of the regeneration process is presented. A comparative study between the two candle filters with different characteristics is presented.

Vasudevan, V.; Kang, B.S-J.; Johnson, E.K.

2002-09-19

239

Cluster Based Sensor Scheduling in a Target Tracking Application with Particle Filtering

In multi-sensor applications, management of sensors is necessary for the classification of the data they produce

Bayazit, Ulug

240

Online Selecting Discriminative Tracking Features Using Particle Filter

The paper proposes a method to keep the tracker robust to background clutter by online selecting

Chen, Xilin

241

X-RAY FLUORESCENCE ANALYSIS OF FILTER-COLLECTED AEROSOL PARTICLES

X-ray fluorescence (XRF) has become an effective technique for determining the elemental content of aerosol samples. For quantitative analysis, the aerosol particles must be collected as uniform deposits on the surface of Teflon membrane filters. An energy dispersive XRF spectrom...

242

The effects of particle charge on the performance of a filtering facepiece.

This study quantitatively determined the effect of electrostatic charge on the performance of an electret filtering facepiece. Monodisperse challenge corn oil aerosols with uniform charges were generated using a modified vibrating orifice monodisperse aerosol generator. The aerosol size distributions and concentrations upstream and downstream of an electret filter were measured using an aerodynamic particle sizer, an Aerosizer, and a scanning mobility particle sizer. The aerosol charge was measured using an aerosol electrometer. The tested electret filter had a packing density of about 0.08, a fiber size of 3 microns, and a thickness of 0.75 mm. As expected, the primary filtration mechanisms for the micrometer-sized particles are interception and impaction, especially at high face velocities, while electrostatic attraction and diffusion are the filtration mechanisms for submicrometer-sized aerosol particles. The fiber charge density was estimated to be 1.35 x 10(-5) coulomb per square meter. After treatment with isopropanol, most of the fiber charges were removed, causing the 0.3-micron aerosol penetration to increase from 36 to 68%. The air resistance of the filter increased slightly after immersion in the isopropanol, probably due to coating by impurities in the isopropanol. The aerosol penetration decreased with increasing aerosol charge. The most penetrating aerosol size became larger as the aerosol charge increased, e.g., from 0.32 to 1.3 microns when the aerosol charge increased from 0 to 500 elementary charges. PMID:9586197

Chen, C C; Huang, S H

1998-04-01

243

Recursive Bayesian Decoding of Motor Cortical Signals by Particle Filtering

… of action and also for its potential use in controlling robotic devices (Black et al. 2003; Chapin et al.

Kass, Rob

244

Reception State Estimation of GNSS satellites in urban environment using particle filtering

The reception state of a satellite is information that is unavailable to Global Navigation Satellite System receivers. Its knowledge or estimation can be used

Paris-Sud XI, Université de

245

Particle Filtering for Dynamic Agent Modelling in Simplified Poker

… role in the eventual development of world champion poker-playing programs. Dynamic behavior is also … information, dynamic agents, and the need for fast learning. State estimation techniques, such as Kalman

Bowling, Michael

246

Particle Filtering for Dynamic Agent Modelling in Simplified Poker

… role in the eventual development of world champion poker-playing programs. Dynamic behavior is also … information, dynamic agents, and the need for fast learning. State estimation techniques, such as Kalman

Bowling, Michael

247

The problem of multitarget tracking in underwater multistatic active sonobuoy systems is challenging because of the large number of false contacts and multiple reflections that reach the receivers. Targeting a robust solution that can track an unknown, time-varying number of targets, while keeping continuous tracks even in scenarios with a large number of false contacts per ping, a particle filter

Jacques Georgy; Aboelmagd Noureldin; Garfield R. Mellema

2012-01-01

248

Tracking Human Position and Lower Body Parts Using Kalman and Particle Filters Constrained by

… for visual tracking of human body parts is introduced. The presented approach demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera using a limb tracking system based on a 2D

Nebel, Jean-Christophe

249

Flaw profile characterization from nondestructive evaluation (NDE) measurements is a typical inverse problem. A novel transformation of this inverse problem into a tracking problem and subsequent application of a sequential Monte Carlo method called particle filtering has been proposed by the authors in an earlier publication. In this paper, the problem of flaw characterization from multisensor data is considered.

Tariq Khan; Pradeep Ramuhalli; Sarat C. Dass

2011-01-01

250

Atmospheric Refractivity Tracking From Radar Clutter Using Kalman and Particle Filters

… the sea clutter measured from sea-borne radars operating in the region. A split-step fast Fourier transform based parabolic equation approximation to the wave equation is used to compute the clutter return

Gerstoft, Peter

251

MCMC-Based Particle Filtering for Tracking a Variable Number of Interacting Targets

… importance sampling step in the particle filter with a novel Markov chain Monte Carlo (MCMC) sampling step. In particular, they have important implications for vision-based tracking of animals, which has countless … shows 20 insects (ants) being tracked in a small enclosed area. In this case, the targets do not behave

Collins, Robert T.

252

Real-time monitoring of complex industrial processes with particle filters

We analyzed two industrial processes: an industrial dryer and a level tank. For these applications, we compared three particle filtering variants; in each case, the model representation was obtained by a standard procedure in control engineering [8].

Poole, David

253

Prognostics of PEM fuel cell in a particle filtering framework

Proton Exchange Membrane Fuel Cells (PEMFC) suffer from a limited lifespan. Keywords: Proton exchange membrane (PEM) fuel cell, Prognostics, Remaining useful life

Paris-Sud XI, Université de

254

Gravity inversion of a fault by Particle swarm optimization (PSO).

Particle swarm optimization (PSO) is a heuristic global optimization method based on swarm intelligence, inspired by research on bird and fish flock movement behavior. In this paper we introduce and use this method in the gravity inverse problem. We discuss the solution for the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models. PMID:23961391

Toushmalani, Reza

2013-01-01

255

A hierarchical particle swarm optimizer and its adaptive variant.

A hierarchical version of the particle swarm optimization (PSO) metaheuristic is introduced in this paper. In the new method called H-PSO, the particles are arranged in a dynamic hierarchy that is used to define a neighborhood structure. Depending on the quality of their so-far best-found solution, the particles move up or down the hierarchy. This gives good particles that move up in the hierarchy a larger influence on the swarm. We introduce a variant of H-PSO, in which the shape of the hierarchy is dynamically adapted during the execution of the algorithm. Another variant is to assign different behavior to the individual particles with respect to their level in the hierarchy. H-PSO and its variants are tested on a commonly used set of optimization functions and are compared to PSO using different standard neighborhood schemes. PMID:16366251

Janson, Stefan; Middendorf, Martin

2005-12-01

256

NASA Astrophysics Data System (ADS)

In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible.
Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
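The global-best PSO principle described in the abstract above (particles attracted toward their own best-found position and the current swarm leader) can be sketched in a few lines. This is a minimal canonical single-objective PSO on a toy test function, not the maximin-based multi-objective variant the entry proposes; all parameter values are illustrative defaults:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and is pulled toward both it and the swarm-wide best position."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

The swarm-leader identification here is trivial because there is a single scalar objective; the entry's point is precisely that this step breaks down under multiple objectives.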

Paasche, H.; Tronicke, J.

2012-04-01

257

Using the innovation analysis method in the time domain, based on the autoregressive moving average (ARMA) innovation model, this paper presents a unified white noise estimation theory that includes both input and measurement white noise estimators, and presents a new steady-state optimal state estimation theory. Non-recursive optimal state estimators are given, whose recursive version gives a steady-state Kalman filter, where

Zi-Li Deng; Huan-Shui Zhang; Shu-Jun Liu; Lu Zhou

1996-01-01

258

Probability hypothesis density (PHD) filtering, implemented using particle filters, is a Bayesian technique used to non-linearly track multiple objects. In this paper, we propose a new approach based on PHD particle filters (PHD-PF) to automatically track the number of magnetoencephalography (MEG) neural dipole sources and their unknown states. In particular, by separating the MEG measurements using independent component analysis, PHD-PF

L. Miao; J. J. Zhang; C. Chakrabarti; A. Papandreou-Suppappola; N. Kovvali

2011-01-01

259

Synthesis of fiber Bragg grating filters for optimal DPSK demodulation

NASA Astrophysics Data System (ADS)

A multiple fiber Bragg grating structure is proposed for optimal demodulation of differential phase shifting keys (DPSK) optical signals. Specific grating design and synthesis are presented for DPSK demodulation at 10 Gbit/s to operate either in reflection or transmission configurations.

Longhi, S.; Gatti, D.; Laporta, P.; Belmonte, M.

2008-10-01

260

Synthesis of fiber Bragg grating filters for optimal DPSK demodulation

A multiple fiber Bragg grating structure is proposed for optimal demodulation of differential phase shifting keys (DPSK) optical signals. Specific grating design and synthesis are presented for DPSK demodulation at 10 Gbit/s to operate either in reflection or transmission configurations.

S. Longhi; D. Gatti; P. Laporta; M. Belmonte

2008-01-01

261

Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
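The abstract above states that AAPSO computes the acceleration coefficients from particle fitness values instead of random numbers, but does not give the formula. The sketch below is one plausible illustrative scheme (the function name and the linear ranking rule are assumptions, not the paper's method): good particles weight their own memory, poor particles follow the swarm leader.

```python
def adaptive_coefficients(fitness, best_fit, worst_fit, c_min=0.5, c_max=2.5):
    """Illustrative fitness-driven acceleration coefficients (minimization).
    rank is 0 for the best particle in the swarm and 1 for the worst."""
    if worst_fit == best_fit:
        return c_max, c_min  # degenerate swarm: fall back to fixed values
    rank = (fitness - best_fit) / (worst_fit - best_fit)
    c1 = c_max - (c_max - c_min) * rank   # cognitive pull: strong for good particles
    c2 = c_min + (c_max - c_min) * rank   # social pull: strong for poor particles
    return c1, c2

# A mid-ranked particle gets balanced coefficients.
c1, c2 = adaptive_coefficients(fitness=3.0, best_fit=1.0, worst_fit=5.0)
# -> c1 = 1.5, c2 = 1.5
```

The coefficients would replace the random acceleration values in the standard PSO velocity update, removing the randomness the abstract identifies as a weakness.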

Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

2014-01-01

262

Support vector machine based on adaptive acceleration particle swarm optimization.

Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

2014-01-01

263

High-efficiency particulate air (HEPA) filters are widely used to control particulate matter emissions from processes that involve management or treatment of radioactive materials. Section FC of the American Society of Mechanical Engineers AG-1 Code on Nuclear Air and Gas Treatment currently restricts media velocity to a maximum of 2.5 cm/sec in any application where this standard is invoked. There is some desire to eliminate or increase this media velocity limit. A concern is that increasing media velocity will result in higher emissions of ultrafine particles; thus, it is unlikely that higher media velocities will be allowed without data to demonstrate the effect of media velocity on removal of ultrafine particles. In this study, the performance of nuclear grade HEPA filters, with respect to filter efficiency and most penetrating particle size, was evaluated as a function of media velocity. Deep-pleat nuclear grade HEPA filters (31 cm x 31 cm x 29 cm) were evaluated at media velocities ranging from 2.0 to 4.5 cm/sec using a potassium chloride aerosol challenge having a particle size distribution centered near the HEPA filter most penetrating particle size. Filters were challenged under two distinct mass loading rate regimes through the use of or exclusion of a 3 microm aerodynamic diameter cut point cyclone. Filter efficiency and most penetrating particle size measurements were made throughout the duration of filter testing. Filter efficiency measured at the onset of aerosol challenge was noted to decrease with increasing media velocity, with values ranging from 99.999 to 99.977%. The filter most penetrating particle size recorded at the onset of testing was noted to decrease slightly as media velocity was increased and was typically in the range of 110-130 nm. Although additional testing is needed, these findings indicate that filters operating at media velocities up to 4.5 cm/sec will meet or exceed current filter efficiency requirements. 
Additionally, increased emission of ultrafine particles is seemingly negligible. PMID:18726819

Alderman, Steven L; Parsons, Michael S; Hogancamp, Kristina U; Waggoner, Charles A

2008-11-01

264

A new approach to linear least squares estimation of continuous-time (wide sense) stationary stochastic processes is presented. The basic idea is that the relevant estimates can be ex- pressed not only in terms of the usual (forward) innovation process but also in terms of a backward innovation process. The functions determining the optimal filter as well as the error covariance

Anders Lindquist

1974-01-01

265

In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter of each pair is selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
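The opposition-based initialization step described in this abstract (generate a random solution, form its opposite a + b - x within the bounds [a, b], keep the fitter of the pair) can be sketched directly. The error function below is a simple stand-in, not the paper's FIR filter design fitness:

```python
import random

def opposition_init(f, n, dim, lo, hi):
    """Opposition-based initialization: for each random point, also evaluate
    its opposite lo + hi - x and keep whichever has the lower fitness."""
    pop = []
    for _ in range(n):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        x_opp = [lo + hi - v for v in x]       # opposite point
        pop.append(min((x, x_opp), key=f))     # keep the fitter of the pair
    return pop

# Example: seeding a population of 4-dimensional coefficient vectors
# (the quadratic error here is a placeholder objective).
pop = opposition_init(lambda c: sum(v * v for v in c), n=10, dim=4, lo=0.0, hi=1.0)
```

Because each member is the better half of a random/opposite pair, the initial population is no worse, and on average better, than a purely random one of the same size at the cost of doubling the initial fitness evaluations.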

Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

2013-01-01

266

Automatic detection of auditory salience with optimized linear filters derived from human judgments of when a particular auditory event attracts human attention: such events trigger our bottom-up attention when we forget to take the cash from the ATM or when a fire alarm goes off. Previous attempts at automatic detection of salient audio

Hasegawa-Johnson, Mark

267

Design formulas for a class of double-terminated optimal filters

Design equations are presented for a class of optimal filters which are RC ladder networks with double and equal terminations. The resulting network has the maximum gain constant and the minimum product of capacity and resistance. The proposed formulas make it possible to avoid the process of formulating impedance or admittance function and expanding a continued fraction, required with the

T. S. Lim; T. N. Lee

1980-01-01

268

Statistical Design and Optimization for Adaptive Post-silicon Tuning of MEMS Filters

MEMS resonators made in low-loss materials (e.g., silicon, polysilicon, aluminum nitride, etc.) vary die-to-die. In addition, mechanically-coupled MEMS resonators often generate spurious modes outside the intended passband.

Li, Xin

269

In this paper, proximity effect correction in electron beam lithography by means of an artificial neural network is presented. Supporting approximations are introduced to cope with the negative doses inherent in the Gibbs oscillations that arise from the step-like function representation in Fourier space. Miller regularization theory is presented as a better alternative to Tikhonov regularization. Optimal filtering with prolate spheroidal wave functions

P. Jedrasik; J. Garcia; B. De Boeck; D Van Dyck

1998-01-01

270

NASA Technical Reports Server (NTRS)

Telban and Cardullo have developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees-of-freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rational for such modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account as opposed to the off-set of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.

Zaychik, Kirill B.; Cardullo, Frank M.

2012-01-01

271

Optimally apodized ring-resonator filter for DPSK demodulation

Optical micro-ring resonator arrays (OMRA) are versatile elements for designing various photonic integrated circuits and systems for optical signal processing and communication. In this paper, we analyze the performance of an OMRA used for optical demodulation of both single channel DPSK and 3-channel WDM-DPSK signals. Several apodization profiles have been investigated to determine the optimal performance. It is shown

Raunaq Agarwal; Ranjan Gangopadhyay; Giancarlo Prati; Sumanta Gupta; Paolo Pintus

2009-01-01

272

Methods have been developed to assess the size distribution of alpha emitting particles of reactor fuel of known composition captured on air sampler filters. The sizes of uranium oxide and plutonium oxide particles were determined using a system based on CR-39 solid-state nuclear track detectors. The CR-39 plastic was exposed to the deposited particles across a 400 microm airgap. The exposed CR-39 was chemically etched to reveal clusters of tracks radially dispersed from central points. The number and location of the tracks were determined using an optical microscope with an XY motorised table and image analysis software. The sample mounting arrangement allowed individual particles to be simultaneously viewed with their respective track cluster. The predicted diameters correlated with the actual particle diameters, as measured using the optical microscope. The efficacy of the technique was demonstrated with particles of natural uranium oxide (natUO2) of known size, ranging from 4 to 150 microm in diameter. Two personal air sampler (PAS) filters contaminated with actinide particles were placed against CR-39 and estimated to have size distributions of 0.8 and 1.0 microm activity median aerodynamic diameter (AMAD). PMID:14526944

Richardson, R B; Hegyi, G; Starling, S C

2003-01-01

273

Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter

In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF's performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
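As background to the many particle-filter entries in this listing, the basic bootstrap PF that this abstract compares against (propagate particles through the dynamics, weight by the measurement likelihood, resample) can be sketched for a scalar random-walk state with Gaussian observation noise. This is a generic textbook sketch, not the paper's extended variant; the noise levels q and r are illustrative:

```python
import math
import random

def bootstrap_pf(ys, n=500, q=0.1, r=0.5):
    """Bootstrap particle filter for x_k = x_{k-1} + N(0, q^2),
    y_k = x_k + N(0, r^2). Returns the posterior-mean estimate per step."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # 1. propagate each particle through the random-walk dynamics
        particles = [x + random.gauss(0.0, q) for x in particles]
        # 2. weight by the Gaussian measurement likelihood p(y | x)
        w = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        s = sum(w)
        if s == 0.0:                 # guard against weight underflow
            w = [1.0 / n] * n
        else:
            w = [wi / s for wi in w]
        # 3. posterior-mean estimate, then multinomial resampling
        estimates.append(sum(wi * x for wi, x in zip(w, particles)))
        particles = random.choices(particles, weights=w, k=n)
    return estimates

# Track a constant true state of 1.0 from noisy measurements.
obs = [1.0 + random.gauss(0.0, 0.5) for _ in range(50)]
est = bootstrap_pf(obs)
```

Unlike a Kalman filter, nothing here assumes Gaussian posteriors; the particle cloud can represent the multimodal or heavy-tailed distributions several entries above are concerned with.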

Zhou, Ning; Meng, Da; Lu, Shuai

2013-11-11

274

Applying a fully nonlinear particle filter on a coupled ocean-atmosphere climate model

NASA Astrophysics Data System (ADS)

It is a widely held assumption that particle filters are not applicable in high-dimensional systems due to filter degeneracy, commonly called the curse of dimensionality. This is only true of naive particle filters, and indeed it has been shown that much more advanced methods perform particularly well on systems of dimension up to 2^16 ≈ 6.5 × 10^4. In this talk we will present results from using the equivalent weights particle filter in twin experiments with the global climate model HadCM3. These experiments have a number of notable features. Firstly, the sheer size of the model in use is substantially larger than has been previously achieved. The model has state dimension approximately 4 × 10^6 and approximately 4 × 10^4 observations per analysis step. This is 2 orders of magnitude more than has been achieved with a particle filter in the geosciences. Secondly, the use of a fully nonlinear data assimilation technique to initialise a climate model gives us the possibility to find non-Gaussian estimates for the current state of the climate. In doing so we may find that the same model may demonstrate multiple likely scenarios for forecasts on a multi-annual/decadal timescale. The experiments assimilate artificial sea surface temperatures daily for several years. We will discuss how an ensemble based method for assimilation in a coupled system avoids issues faced by variational methods. Practical details of how the experiments were carried out, specifically the use of the EMPIRE data assimilation framework, will be discussed. The results from applying the nonlinear data assimilation method can always be improved through having a better representation of the model error covariance matrix. We will discuss the representation which we have used for this matrix, and in particular, how it was generated from the coupled system.

Browne, Philip; van Leeuwen, Peter Jan; Wilson, Simon

2014-05-01

275

Genetic algorithm and particle swarm optimization combined with Powell method

NASA Astrophysics Data System (ADS)

In recent years, population-based algorithms have become increasingly robust and easy to use. Based on Darwin's theory of evolution, they search for the best solution through a population that progresses over several generations. This paper presents variants of a hybrid genetic algorithm and of a bio-inspired hybrid algorithm, particle swarm optimization, both combined with the local Powell method. The developed methods were tested on twelve test functions from the unconstrained optimization context.

Bento, David; Pinho, Diana; Pereira, Ana I.; Lima, Rui

2013-10-01

276

Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

NASA Technical Reports Server (NTRS)

Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
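The weighted-average idea behind the optimal smoother described above (combine forward and backward filter estimates, roughly halving the error covariance) reduces, in the scalar case with independent estimates, to inverse-variance weighting. A toy sketch of just that combination step (not the full EUVE attitude pipeline):

```python
def combine(xf, Pf, xb, Pb):
    """Inverse-variance weighted average of independent forward- and
    backward-filter estimates. With equal variances the smoothed
    variance is exactly half the filter variance."""
    Ps = 1.0 / (1.0 / Pf + 1.0 / Pb)   # smoothed variance
    xs = Ps * (xf / Pf + xb / Pb)      # weighted mean estimate
    return xs, Ps

xs, Ps = combine(xf=1.2, Pf=0.4, xb=0.8, Pb=0.4)
# equal variances -> xs = 1.0 (plain average), Ps = 0.2 (half of 0.4)
```

This makes concrete why the smoother "roughly halves" the covariance: each estimate contributes information proportional to the inverse of its variance, and the information adds.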

Sedlak, J.

1994-01-01

277

Optimization-based tuning of LPV fault detection filters for civil transport aircraft

NASA Astrophysics Data System (ADS)

In this paper, a two-step optimal synthesis approach of robust fault detection (FD) filters for the model based diagnosis of sensor faults for an augmented civil aircraft is suggested. In the first step, a direct analytic synthesis of a linear parameter varying (LPV) FD filter is performed for the open-loop aircraft using an extension of the nullspace based synthesis method to LPV systems. In the second step, a multiobjective optimization problem is solved for the optimal tuning of the LPV detector parameters to ensure satisfactory FD performance for the augmented nonlinear closed-loop aircraft. Worst-case global search has been employed to assess the robustness of the fault detection system in the presence of aerodynamics uncertainties and estimation errors in the aircraft parameters. An application of the proposed method is presented for the detection of failures in the angle-of-attack sensor.

Ossmann, D.; Varga, A.

2013-12-01

278

A self-learning particle swarm optimizer for global optimization problems.

Particle swarm optimization (PSO) has been shown as an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause the lack of intelligence for a particular particle, which makes it unable to deal with different complex situations. This paper presents a novel algorithm, called self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which can enable a particle to choose the optimal strategy according to its own local fitness landscape. The experimental study on a set of 45 test functions and two real-world problems show that SLPSO has a superior performance in comparison with several other peer algorithms. PMID:22067435

Li, Changhe; Yang, Shengxiang; Nguyen, Trung Thanh

2012-06-01

279

Decoupled Control Strategy of Grid Interactive Inverter System with Optimal LCL Filter Design

NASA Astrophysics Data System (ADS)

This article presents a control strategy for a three-phase grid interactive voltage source inverter that links a renewable energy source to the utility grid through an LCL-type filter. An optimized LCL-type filter has been designed and modeled so as to reduce the current harmonics in the grid, considering the conduction and switching losses at constant modulation index (Ma). The control strategy adopted here decouples the active and reactive power loops, thus achieving desirable performance with independent control of the active and reactive power injected into the grid. The startup transients can also be controlled by the proposed control strategy; in addition, the optimal LCL filter has lower conduction and switching copper losses as well as core losses. A trade-off has been made between the total losses in the LCL filter and the Total Harmonic Distortion (THD%) of the grid current, and the filter inductor has been designed accordingly. In order to study the dynamic performance of the system and to confirm the analytical results, the models are simulated in the MATLAB/Simulink environment, and the results are analyzed.

Babu, B. Chitti; Anurag, Anup; Sowmya, Tontepu; Marandi, Debati; Bal, Satarupa

2013-09-01

280

Several studies show the increase of penetration through electrostatic filters during exposure to an aerosol flow, because of particle deposition on filter fibers. We studied the effect of increasing loads of paraffin oil aerosol on the penetration of selected particle sizes through an electrostatic filtering facepiece. FFP2 facepieces were exposed for 8 hr to a flow rate of 95.0 ± 0.5 L/min of polydisperse paraffin aerosol at 20.0 ± 0.5 mg/m³. The penetration of bis(2-ethylhexyl)sebacate (DEHS) monodisperse neutralized aerosols, with selected particle size in the 0.03-0.40 µm range, was measured immediately prior to the start of the paraffin aerosol loading and at 1, 4, and 8 hr after the start of paraffin aerosol loading. Penetration through isopropanol-treated facepieces not loaded with paraffin oil was also measured to evaluate facepiece behavior when electrostatic capture mechanisms are practically absent. During exposure to paraffin aerosol, DEHS penetration gradually increased for all aerosol sizes, and the most penetrating particle size (0.05 µm at the beginning of exposure) shifted slightly to larger diameters. After the isopropanol treatment, the highest penetration was at 0.30 µm. In addition to an increased penetration during paraffin loading at a given particle size, the relative degree of increase was greater as the particle size increased. The penetration value measured after 8 hr for 0.03-µm particles was on average 1.6 times the initial value, whereas it was about 8 times for 0.40-µm particles. This behavior, as is well evidenced in the measurements of isopropanol-treated facepieces, can be attributed to the increasing action in particle capture of the electrostatic forces (Coulomb and polarization), which depend strictly on the diameter and electrical charge of neutralized aerosol particles.
With reference to electrostatic filtering facepieces as personal protective equipment, results suggest the importance of complying with the manufacturer instructions when it is specified that their use has to be restricted to a single shift. PMID:22862434

Plebani, Carmela; Listrani, Stefano; Tranfo, Giovanna; Tombolini, Francesca

2012-01-01

281

Particle swarm optimization with recombination and dynamic linkage discovery.

In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system. PMID:18179066

Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung

2007-12-01

282

NASA Technical Reports Server (NTRS)

The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.

Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

2012-01-01
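The particle filters described in entries like the one above follow a predict-weight-resample cycle: propagate each particle through the process model, weight it by the measurement likelihood, then resample in proportion to the weights. A minimal bootstrap-filter sketch on a 1-D random-walk toy model (the model, noise levels, and particle count are illustrative assumptions, not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed with additive Gaussian noise (toy model for illustration)."""
    particles = rng.normal(0.0, 1.0, n_particles)   # sample the initial PDF
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the process model.
        particles = particles + rng.normal(0.0, 0.1, n_particles)
        # Update: weight particles by the Gaussian measurement likelihood.
        weights = np.exp(-0.5 * (y - particles) ** 2 / 0.25)
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

est = bootstrap_pf([0.1, 0.3, 0.2, 0.5])
```

The posterior mean is taken after resampling for simplicity; weighted means before resampling are also common.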

283

A ship is not an absolutely rigid body. Many factors can cause deformations that lead to large errors in mounted devices, especially in navigation systems. Such errors should be estimated and compensated effectively, or they will severely reduce the navigation accuracy of the ship. In order to estimate the deformation, an unscented particle filter method for estimation of shipboard deformation based on an inertial measurement unit is presented. In this method, a nonlinear shipboard deformation model is built. Simulations demonstrated the accuracy reduction due to deformation. Then an attitude plus angular rate match mode is proposed as a framework to estimate the shipboard deformation using inertial measurement units. In this framework, given the nonlinearity of the system model, an unscented particle filter method is proposed to estimate and compensate the deformation angles. Simulations show that the proposed method gives accurate and rapid deformation estimations, which can increase navigation accuracy after compensation of deformation. PMID:24248280

Wang, Bo; Xiao, Xuan; Xia, Yuanqing; Fu, Mengyin

2013-01-01

284

NASA Technical Reports Server (NTRS)

Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely, Bayesian belief update techniques such as particle filters may require many computational resources to get a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.

Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel

2004-01-01

285

Particle Swarm Optimization of High-frequency Transformer

Hengsi Qin, Jonathan W. Kimball, Ganesh K.; Missouri University of Science and Technology, Rolla, MO 65409-0040 USA. Abstract: A high-frequency transformer is a critical component of a dual active bridge (DAB) converter. Operation of a DAB converter requires its transformer to have a specific amount of winding

Kimball, Jonathan W.

286

Particle swarm optimization for reconfigurable phase-differentiated array design

Multiple-beam antenna arrays have important applications in communications and radar. This paper describes a method of designing a reconfigurable dual-beam antenna array using a new evolutionary algorithm called particle swarm optimization (PSO). The design problem is to find element excitations that will result in a sector pattern main beam with low side lobes with the additional requirement that

Dennis Gies; Yahya Rahmat-Samii

2003-01-01

287

Comparison between Genetic Algorithms and Particle Swarm Optimization

This paper compares two evolutionary computation paradigms: genetic algorithms and particle swarm optimization. The operators of each paradigm are reviewed, focusing on how each affects search behavior in the problem space. The goals of the paper are to provide additional insights into how each paradigm works, and to suggest ways in which performance might be improved by incorporating features from

Russell C. Eberhart; Yuhui Shi

1998-01-01
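The PSO paradigm compared in the entry above updates each particle's velocity from three terms: inertia, a pull toward the particle's personal best, and a pull toward the swarm's global best. A minimal global-best PSO sketch (the inertia weight and acceleration coefficients are conventional defaults from the PSO literature, not parameters from the cited paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim=2, n_particles=30, iters=200,
                 w=0.72, c1=1.49, c2=1.49):
    """Minimal global-best PSO with the standard velocity/position update."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

best, val = pso_minimize(lambda p: np.sum(p ** 2))  # sphere test function
```

On the 2-D sphere function this converges rapidly; genetic algorithms would instead apply selection, crossover, and mutation to the same population.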

288

An effective particle swarm optimization method for data clustering

Data clustering analysis is generally applied to image processing, customer relationship management and product family construction. This paper applies the particle swarm optimization (PSO) algorithm to data clustering problems. Two reflex schemes are implemented in the PSO algorithm to improve its efficiency. The proposed methods were tested on seven datasets, and their performance is compared with those of PSO, K-means and two

I. W. Kao; C. Y. Tsai; Y. C. Wang

2007-01-01

289

An approach to measure trace elements in particles collected on fiber filters using EDXRF.

A method developed for the analysis of a large number of aerosol samples using Energy Dispersive X-Ray Fluorescence (EDXRF), and its performance, are discussed in this manuscript. Atmospheric aerosol samples evaluated in this study were collected on cellulose fiber (Whatman-41) filters, employing a Hi-Vol sampler, at a monitoring station located on the Mediterranean coast of Turkey, between 1993 and 2001. Approximately 1700 samples were collected in this period. Six hundred of these samples were analyzed by instrumental neutron activation analysis (INAA), and the rest were archived. EDXRF was selected as the analytical technique to analyze the 1700 aerosol samples because of its speed and non-destructive nature. However, analysis of aerosol samples collected on fiber filters with a surface technique such as EDXRF was a challenge. A penetration depth calculation performed in this study revealed that EDXRF can obtain information from only the top 150 µm of our fiber filter material. Calibration of the instrument with currently available thin-film standards gave unsatisfactory results, since the actual penetration depth of particles into the fiber filters was much deeper than 150 µm. A method was developed in this manuscript to analyze fiber filter samples quickly with XRF. Two hundred samples that had been analyzed by INAA were divided into two equal batches. One of these batches was used to calibrate the XRF and the second batch was used for verification. The results showed that the developed method can be reliably used for routine analysis of fiber filters loaded with ambient aerosol. PMID:21147325

Öztürk, Fatma; Zararsiz, Abdullah; Kirmaz, Ridvan; Tuncel, Gürdal

2011-01-15

290

A sensitive technique for the measurement of dissolved and particulate actinide concentrations and water column distributions is described. Pu, Am, and Th isotopes are collected using large-volume, wire-mounted electrical pumping systems. Particles were removed by filtration, and actinides by absorption on MnO2-coated filters. The very large volumes processed (up to 4000 liters) result in very sensitive and precise concentration measurements

H. D. Livingston; J. K. Cochran

1987-01-01

291

A Geometry-Based Particle Filtering Approach to White Matter Tractography

We introduce a fibre tractography framework based on a particle filter which estimates a local geometrical model of the underlying white matter tract, formulated as a streamline flow using generalized helicoids. The method is not dependent on the diffusion model, and is applicable to diffusion tensor (DT) data as well as to high angular resolution reconstructions. The geometrical model allows

Peter Savadjiev; Yogesh Rathi; James G. Malcolm; Martha Elizabeth Shenton; Carl-Fredrik Westin

2010-01-01

292

Tracking a walking person using activity-guided annealed particle filtering

Tracking human pose using observations from less than three cameras is a challenging task due to ambiguity in the available image evidence. This work presents a method for tracking using a pre-trained model of activity to guide sampling within an Annealed Particle Filtering framework. The approach is an example of model-based analysis-by- synthesis and is capable of robust tracking from

John Darby; Baihua Li; Nicholas Costen

2008-01-01

293

Multi-Modal Particle Filtering Tracking using Appearance, Motion and Audio Likelihoods

We propose a multi-modal object tracking algorithm that combines appearance, motion and audio information in a particle filter. The proposed tracker fuses at the likelihood level the audio-visual observations captured with a video camera coupled with two microphones. Two video likelihoods are computed that are based on a 3D color histogram appearance model and on a color change

Matteo Bregonzio; Murtaza Taj; Andrea Cavallaro

2007-01-01

294

Radioactive particles are aggregates of radioactive atoms that may contain significant activity concentrations. They have been released into the environment from nuclear weapons tests, and from accidents and effluents associated with the nuclear fuel cycle. Aquatic filter-feeders can capture and potentially retain radioactive particles, which could then provide concentrated doses to nearby tissues. This study experimentally investigated the retention and effects of radioactive particles in the blue mussel, Mytilus edulis. Spent fuel particles originating from the Dounreay nuclear establishment, and collected in the field, comprised a U and Al alloy containing fission products such as (137)Cs and (90)Sr/(90)Y. Particles were introduced into mussels in suspension with plankton-food or through implantation in the extrapallial cavity. Of the particles introduced with food, 37% were retained for 70 h, and were found on the siphon or gills, with the notable exception of one particle that was ingested and found in the stomach. Particles not retained seemed to have been actively rejected and expelled by the mussels. The largest and most radioactive particle (estimated dose rate 3.18 ± 0.06 Gy h(-1)) induced a significant increase in Comet tail-DNA %. In one case this particle caused a large white mark (suggesting necrosis) in the mantle tissue with a simultaneous increase in micronucleus frequency observed in the haemolymph collected from the muscle, implying that non-targeted effects of radiation were induced by radiation from the retained particle. White marks found in the tissue were attributed to ionising radiation and physical irritation. The results indicate that current methods used for risk assessment, based upon the absorbed dose equivalent limit and estimating the "no-effect dose", are inadequate for radioactive particle exposures.
Knowledge is lacking about the ecological implications of radioactive particles released into the environment, for example potential recycling within a population, or trophic transfer in the food chain. PMID:25240099

Jaeschke, B C; Lind, O C; Bradshaw, C; Salbu, B

2015-01-01

295

Segmentation of Nerve Bundles and Ganglia in Spine MRI Using Particle Filters

Automatic segmentation of spinal nerve bundles that originate within the dural sac and exit the spinal canal is important for diagnosis and surgical planning. The variability in intensity, contrast, shape and direction of nerves seen in high resolution myelographic MR images makes segmentation a challenging task. In this paper, we present an automatic tracking method for nerve segmentation based on particle filters. We develop a novel approach to particle representation and dynamics, based on Bézier splines. Moreover, we introduce a robust image likelihood model that enables delineation of nerve bundles and ganglia from the surrounding anatomical structures. We demonstrate accurate and fast nerve tracking and compare it to expert manual segmentation. PMID:22003741

Dalca, Adrian; Danagoulian, Giovanna; Kikinis, Ron; Schmidt, Ehud; Golland, Polina

2011-01-01

296

Robust dead reckoning system for mobile robots based on particle filter and raw range scan.

Robust dead reckoning is a complicated problem for wheeled mobile robots (WMRs) when the robots are faulty, for example through the sticking of sensors or the slippage of wheels, because the discrete fault modes and the continuous states have to be estimated simultaneously to reach a reliable fault diagnosis and accurate dead reckoning. Particle filters are one of the most promising approaches for handling hybrid system estimation problems, and they have also been widely used in many WMR applications, such as pose tracking, SLAM, video tracking, and fault identification. In this paper, the readings of a laser range finder, which may also be corrupted by noise, are used to reach accurate dead reckoning. The main contribution is a systematic method to implement fault diagnosis and dead reckoning concurrently in a particle filter framework. Firstly, the perception model of a laser range finder is given, where the raw scan may be faulty. Secondly, the kinematics of the normal model and different fault models for WMRs are given. Thirdly, the particle filter for fault diagnosis and dead reckoning is discussed. Finally, experiments and analyses are reported to show the accuracy and efficiency of the presented method. PMID:25192318

Duan, Zhuohua; Cai, Zixing; Min, Huaqing

2014-01-01


298

Particles in swimming pool filters--does pH determine the DBP formation?

The formation of different groups of disinfection byproducts (DBPs) during chlorination of filter particles from swimming pools was investigated at different pH values, and the toxicity was estimated. Specifically, the formation of the trihalomethanes (THMs), a DBP group regulated in many countries, and of the non-regulated haloacetic acids (HAAs) and haloacetonitriles (HANs) was investigated at 6.0 ≤ pH ≤ 8.0, under controlled chlorination conditions. The investigated particles were collected from a hot tub with a drum micro filter. In two series of experiments, with either constant initial active or constant initial free chlorine concentrations, the particles were chlorinated at different pH values in the range relevant for swimming pools. THM and HAA formation was reduced by decreasing pH, while HAN formation increased with decreasing pH. Based on the organic content, the relative DBP formation from the particles was higher than previously reported for body fluid analogue and filling water. The genotoxicity and cytotoxicity estimated from the formation of DBPs from the treated particle suspension increased with decreasing pH. Among the quantified DBP groups, the HANs were responsible for the majority of the toxicity from the measured DBPs. PMID:22285035

Hansen, Kamilla M S; Willach, Sarah; Mosbæk, Hans; Andersen, Henrik R

2012-04-01

299

Optimal Design of CSD Coefficient FIR Filters Subject to Number of Nonzero Digits

NASA Astrophysics Data System (ADS)

In hardware implementations of FIR (Finite Impulse Response) digital filters, it is desirable to reduce the total number of nonzero digits used for the representation of filter coefficients. In general, the design of FIR filters with CSD (Canonic Signed Digit) representation, which is efficient for reducing the number of multiplier units, is formulated as a 0-1 combinatorial problem. In such a problem, some difficult constraints prevent the problem from being linearized. Although many kinds of heuristic approaches have been applied to solve the problem, solutions obtained in this manner cannot be guaranteed to be optimal. In this paper, we formulate the design problem as a 0-1 mixed integer linear programming problem and solve it using the branch and bound technique, which is a powerful method for solving integer programming problems. Several design examples are shown to demonstrate the efficient performance of the proposed method.

Ozaki, Yuichi; Suyama, Kenji

300

Design of FIR Filters with Discrete Coefficients using Ant Colony Optimization

NASA Astrophysics Data System (ADS)

In this paper, we propose a new design method for linear phase FIR (Finite Impulse Response) filters with discrete coefficients. In a hardware implementation, filter coefficients must be represented as discrete values. The design problem of digital filters with discrete coefficients is formulated as an integer programming problem, and an enormous amount of computational time is required to solve the problem with an exact solver. Recently, ACO (Ant Colony Optimization), a heuristic approach, has been used widely for solving combinatorial problems such as the traveling salesman problem. In our method, we formulate the design problem as a 0-1 integer programming problem and solve it using ACO. Several design examples are shown to demonstrate the effectiveness of the proposed method.

Tsutsumi, Shuntaro; Suyama, Kenji
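The ACO idea referred to above can be illustrated on the traveling salesman problem the abstract mentions: ants build tours probabilistically from pheromone and heuristic information, and short tours reinforce the pheromone trail. A generic Ant System sketch (the parameters are conventional textbook choices, not the authors' 0-1 filter-design formulation):

```python
import numpy as np

rng = np.random.default_rng(3)

def aco_tsp(dist, n_ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.5):
    """Minimal Ant System for the symmetric TSP (illustration only)."""
    n = len(dist)
    tau = np.ones((n, n))                       # pheromone trails
    eta = 1.0 / (dist + np.eye(n))              # heuristic visibility
    best_tour, best_len = None, np.inf
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [0]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False              # forbid visited cities
                p = (tau[i] ** alpha * eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=p / p.sum()))
            tours.append(tour)
        tau *= (1 - rho)                        # pheromone evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                  # deposit pheromone
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len

pts = rng.random((6, 2))
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
tour, length = aco_tsp(dist)
```

Applying ACO to discrete filter coefficients replaces city choices with digit choices per coefficient; the pheromone mechanics stay the same.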

301

Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of the PV modules, the PV modules' optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters. (author)

Kornelakis, Aris [Technical University of Crete, Department of Electronic and Computer Engineering, Chania (Greece)

2010-12-15

302

A novel chaos particle swarm optimization (PSO) and its application in pavement maintenance decision

The particle swarm optimization (PSO) algorithm is a new random global optimization algorithm, but simple PSO (SPSO) lacks high convergence speed, strong optimization ability, and so on. To improve the optimization properties of SPSO, a novel chaos particle swarm optimization (CPSO) algorithm is presented. The characteristics of ergodicity and randomness of chaotic variables are considered to produce the

Yi Shen; Yunfeng Bu; Mingxin Yuan

2009-01-01

303

All signals, except sine waves, exhibit intrinsic modulations that affect perceptual masking. Reducing the physical intrinsic modulations of a broadband signal does not necessarily have a perceptual impact: auditory filtering can reintroduce modulations. Broadband signals with low intrinsic modulations after auditory filtering have proved difficult to design. To that end, this paper introduces a class of signals termed pulse-spreading harmonic complexes (PSHCs). PSHCs are generated by summing harmonically related components with such a phase that the resulting waveform exhibits pulses equally-spaced within a repetition period. The order of a PSHC determines its pulse rate. Simulations with a gamma-tone filterbank suggest an optimal pulse rate at which, after auditory filtering, the PSHC's intrinsic modulations are lowest. These intrinsic modulations appear to be less than those for broadband pseudo-random (PR) or low-noise (LN) noise. This hypothesis was tested in a modulation-detection experiment involving five modulation rates ranging from 8 to 128 Hz and both broadband and narrowband carriers using PSHCs, PR, and LN noise. PSHC showed the lowest thresholds of all broadband signals. Results imply that optimized PSHCs exhibit less intrinsic modulations after auditory filtering than any other broadband signal previously considered. PMID:25190401

Hilkhuysen, Gaston; Macherey, Olivier

2014-09-01

304

NASA Astrophysics Data System (ADS)

Summary: Hybrid data assimilation (DA) is widely used in recent hydrology and water resources research. In this study, one newly introduced technique, the ensemble particle filter (EnPF), formed by coupling the ensemble Kalman filter (EnKF) with the particle filter (PF), is applied for multi-layer soil moisture prediction in the Meilin watershed based on support vector machines (SVMs). The data used in this paper include six-layer soil moisture (0-5 cm, 30 cm, 50 cm, 100 cm, 200 cm and 300 cm) and five meteorological parameters (soil temperature at 5 cm and 20 cm, air temperature, relative humidity and solar radiation) in the study area. In order to investigate this EnPF approach, another two filters, EnKF and PF, are applied as data assimilation methods for comparison. In addition, the SVM-simulated data without updating by a data assimilation technique are discussed as well to evaluate the data assimilation technique. Two experimental cases are explored here, one with 200 initial training ensemble members in the SVM training phase and the other with 1000 initial training ensemble members. Three main findings are obtained in this study: (1) the SVM is a statistically sound and robust model for soil moisture prediction in both the surface and root zone layers, and the larger the initial training data ensemble, the more effective the derived operator; (2) data assimilation does improve the performance of SVM modeling; (3) EnPF outperforms the other two filters as well as the SVM model. Moreover, the ability of EnPF and PF is not positively related to the resampling ensemble size; when the resampling size exceeds a certain amount, the performance of EnPF and PF degrades. Because the EnPF still performs better than EnKF, it can be used as a powerful data assimilation tool in soil moisture prediction.

Yu, Zhongbo; Liu, Di; Lü, Haishen; Fu, Xiaolei; Xiang, Long; Zhu, Yonghua

2012-12-01
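The EnKF component of the EnPF above updates an ensemble of states with a Kalman gain built from the ensemble's sample covariance. A minimal stochastic (perturbed-observation) analysis-step sketch on a toy 2-D state (the observation operator, variances, and ensemble size are illustrative assumptions, unrelated to the paper's soil moisture setup):

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, y, H, obs_var):
    """Stochastic EnKF analysis step (generic sketch).

    ensemble: (n_members, n_state) prior sample
    y:        (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)            # ensemble anomalies
    P = X.T @ X / (n - 1)                           # sample covariance
    R = obs_var * np.eye(len(y))                    # observation covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    y_pert = y + rng.normal(0.0, np.sqrt(obs_var), (n, len(y)))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

ens = rng.normal(0.0, 1.0, (100, 2))                # prior ensemble
H = np.array([[1.0, 0.0]])                          # observe first component
updated = enkf_update(ens, np.array([0.5]), H, 0.1)
```

An EnPF-style hybrid would follow this update with particle-filter weighting and resampling of the analysis members; that step is omitted here.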

305

NASA Astrophysics Data System (ADS)

In this paper, a novel hybrid algorithm featuring a simple index modulation profile with fast-converging optimization is proposed for the design of dense wavelength-division-multiplexing (DWDM) multichannel fiber Bragg grating (FBG) filters. The approach is based on utilizing one of the other FBG design approaches, which may suffer from spectral distortion, as the first step, then performing Lagrange multiplier optimization (LMO) to correct the spectral distortion. In our design examples, the superposition method is employed as the first design step for its merits of easy fabrication, and the discrete layer-peeling (DLP) algorithm is used to rapidly obtain the initial index modulation profiles for the superposition method. On account of the initially near-optimum index modulation profiles from the first step, the LMO algorithm shows fast convergence to the target reflection spectra in the second step, and the design outcome still retains the advantage of easy fabrication.

Hsin, Chen-Wei

2011-07-01

306

Optimizing binary phase and amplitude filters for PCE, SNR, and discrimination

NASA Technical Reports Server (NTRS)

Binary phase-only filters (BPOFs) have generated much study because of their implementation on currently available spatial light modulator devices. On polarization-rotating devices such as the magneto-optic spatial light modulator (SLM), it is also possible to encode binary amplitude information into two SLM transmission states, in addition to the binary phase information. This is done by varying the rotation angle of the polarization analyzer following the SLM in the optical train. Through this parameter, a continuum of filters may be designed that span the space of binary phase and amplitude filters (BPAFs) between BPOFs and binary amplitude filters. In this study, we investigate the design of optimal BPAFs for the key correlation characteristics of peak sharpness (through the peak-to-correlation energy (PCE) metric), signal-to-noise ratio (SNR), and discrimination between in-class and out-of-class images. We present simulation results illustrating improvements obtained over conventional BPOFs, and trade-offs between the different performance criteria in terms of the filter design parameter.

Downie, John D.

1992-01-01

307

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi- bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on a previous design. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using microlithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1998-01-01

308

Multi-Bandwidth Frequency Selective Surfaces for Near Infrared Filtering: Design and Optimization

NASA Technical Reports Server (NTRS)

Frequency selective surfaces are widely used in the microwave and millimeter wave regions of the spectrum for filtering signals. They are used in telecommunication systems for multi-frequency operation or in instrument detectors for spectroscopy. The frequency selective surface operation depends on a periodic array of elements resonating at prescribed wavelengths producing a filter response. The size of the elements is on the order of half the electrical wavelength, and the array period is typically less than a wavelength for efficient operation. When operating in the optical region, diffraction gratings are used for filtering. In this regime the period of the grating may be several wavelengths producing multiple orders of light in reflection or transmission. In regions between these bands (specifically in the infrared band) frequency selective filters consisting of patterned metal layers fabricated using electron beam lithography are beginning to be developed. The operation is completely analogous to surfaces made in the microwave and millimeter wave region except for the choice of materials used and the fabrication process. In addition, the lithography process allows an arbitrary distribution of patterns corresponding to resonances at various wavelengths to be produced. The design of sub-millimeter filters follows the design methods used in the microwave region. Exacting modal matching, integral equation or finite element methods can be used for design. A major difference though is the introduction of material parameters and thicknesses that may not be important in longer wavelength designs. This paper describes the design of multi-bandwidth filters operating in the 1-5 micrometer wavelength range. This work follows on previous designs [1,2]. In this paper extensions based on further optimization and an examination of the specific shape of the element in the periodic cell will be reported. 
Results from the design, manufacture and test of linear wedge filters built using micro-lithographic techniques and used in spectral imaging applications will be presented.

Cwik, Tom; Fernandez, Salvador; Ksendzov, A.; LaBaw, Clayton C.; Maker, Paul D.; Muller, Richard E.

1999-01-01

309

We present an approach for tracking a lecturer during the course of his speech. We use features from multiple cameras and microphones, and process them in a joint particle filter framework. The filter performs sampled projections of 3D location hypotheses and scores them using features from both audio and video. On the video side, the features are based on

Kai Nickel; Tobias Gehrig; Hazim K. Ekenel; John McDonough; Rainer Stiefelhagen

310

PCDD/F formation in an iron/potassium-catalyzed diesel particle filter.

Catalytic diesel particle filters (DPFs) have evolved into a powerful environmental technology. Several metal-based, fuel-soluble catalysts, so-called fuel-borne catalysts (FBCs), were developed to catalyze soot combustion and support filter regeneration. Mainly iron- and cerium-based FBCs have been commercialized for passenger car and heavy-duty vehicle applications. We investigated a new iron/potassium-based FBC used in combination with an uncoated silicon carbide filter and report effects on emissions of polychlorinated dibenzodioxins/furans (PCDD/Fs). The PCDD/F formation potential was assessed under best and worst case conditions, as required for filter approval under the VERT protocol. TEQ-weighted PCDD/F emissions remained low when using the Fe/K catalyst (37/7.5 μg/g) with the filter and commercial, low-sulfur fuel. The addition of chlorine (10 μg/g) immediately led to an intense PCDD/F formation in the Fe/K-DPF. TEQ-based emissions increased 51-fold from engine-out levels of 95 to 4800 pg I-TEQ/L after the DPF. Emissions of 2,3,7,8-TCDD, the most toxic congener (TEF = 1.0), increased 320-fold, and those of 2,3,7,8-TCDF (TEF = 0.1) even 540-fold. Remarkable pattern changes were noticed, indicating a preferential formation of tetrachlorinated dibenzofurans. It has been shown that potassium acts as a structural promoter, inducing the formation of magnetite (Fe3O4) rather than hematite (Fe2O3). This may alter the catalytic properties of iron. But the chemical nature of this new catalyst is yet unknown, and we are far from an established mechanism for this new pathway to PCDD/Fs. In conclusion, the iron/potassium-catalyzed DPF has a high PCDD/F formation potential, similar to those of copper-catalyzed filters, which are prohibited by Swiss legislation. PMID:23713673

Heeb, Norbert V; Zennegg, Markus; Haag, Regula; Wichser, Adrian; Schmid, Peter; Seiler, Cornelia; Ulrich, Andrea; Honegger, Peter; Zeyer, Kerstin; Emmenegger, Lukas; Bonsack, Peter; Zimmerli, Yan; Czerwinski, Jan; Kasper, Markus; Mayer, Andreas

2013-06-18

311

A New Particle Swarm Optimization Algorithm for Dynamic Environments

NASA Astrophysics Data System (ADS)

Many real world optimization problems are dynamic, in which the global optimum and local optima change over time. Particle swarm optimization has performed well in finding and tracking optima in dynamic environments. In this paper, we propose a new particle swarm optimization algorithm for dynamic environments. The proposed algorithm utilizes a parent swarm to explore the search space and some child swarms to exploit promising areas found by the parent swarm. To improve the search performance, when the search areas of two child swarms overlap, the worse child swarm is removed. Moreover, in order to quickly track the changes in the environment, all particles in a child swarm perform a random local search around the best position found by the child swarm after a change in the environment is detected. Experimental results on different dynamic environments modelled by the moving peaks benchmark show that the proposed algorithm outperforms other PSO algorithms, including FMSO, a similar particle swarm algorithm for dynamic environments, for all tested environments.
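The canonical PSO update that multi-swarm variants like the one above build on can be sketched as follows. This is a generic single-swarm implementation for a static objective, not the authors' parent/child algorithm, and the parameter values (inertia w, acceleration coefficients c1, c2) are conventional illustrative choices:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

A dynamic-environment variant would additionally re-evaluate the bests and perturb particles when a change is detected, as the abstract describes.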

Kamosi, Masoud; Hashemi, Ali B.; Meybodi, M. R.

312

Research of spatial high-pass filtering algorithm in particles real-time measurement system

NASA Astrophysics Data System (ADS)

With the growing application of computer-integrated manufacturing systems (CIMS), enterprises increasingly need CAQ systems as production becomes more flexible and automated. Automated Visual Inspection (AVI) is a non-contact measurement method based on computer vision that combines technologies such as image processing and precision measurement. The particle real-time measurement system analyzes the target images obtained by the computer vision system and extracts the useful measurement information. Combined with existing prior knowledge, the user can then take timely measures to reduce the floating ash. Based on an analysis of the particle images, this paper investigates a spatial high-pass filtering method, gradient operators, matched to the characteristics of the images. In order to remove the interference of the background and enhance the edge lines of particles, it uses kernels in two directions to process the images. This spatial high-pass filtering algorithm also supports the subsequent image processing used to obtain useful information about floating ash particles.
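A minimal sketch of this kind of two-direction gradient high-pass filtering. Sobel-style kernels are assumed here for illustration; the abstract does not give the exact kernel coefficients used:

```python
import numpy as np

def gradient_highpass(img):
    """Spatial high-pass filtering with two directional gradient kernels.

    Correlates the image with horizontal and vertical difference kernels
    and combines the responses into a gradient magnitude, suppressing the
    slowly varying background and enhancing particle edges.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # x-direction
    ky = kx.T                                                         # y-direction
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)   # gradient magnitude
```

A flat region produces zero response, while edges (particle boundaries) produce strong responses, which is the high-pass behavior the paper exploits.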

Jin, Xuanhong; Dai, Shuguang; Mu, Pingan

2010-08-01

313

Filter feeders and plankton increase particle encounter rates through flow regime control

Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has great implications for understanding of selection pressure on the physiology and ecology of organisms, for example filter feeders able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are approximately 33% of that predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879

Humphries, Stuart

2009-01-01

314

Filter feeders and plankton increase particle encounter rates through flow regime control.

Collisions between particles or between particles and other objects are fundamental to many processes that we take for granted. They drive the functioning of aquatic ecosystems, the onset of rain and snow precipitation, and the manufacture of pharmaceuticals, powders and crystals. Here, I show that the traditional assumption that viscosity dominates these situations leads to consistent and large-scale underestimation of encounter rates between particles and of deposition rates on surfaces. Numerical simulations reveal that the encounter rate is Reynolds number dependent and that encounter efficiencies are consistent with the sparse experimental data. This extension of aerosol theory has great implications for understanding of selection pressure on the physiology and ecology of organisms, for example filter feeders able to gather food at rates up to 5 times higher than expected. I provide evidence that filter feeders have been strongly selected to take advantage of this flow regime and show that both the predicted peak concentration and the steady-state concentrations of plankton during blooms are approximately 33% of that predicted by the current models of particle encounter. Many ecological and industrial processes may be operating at substantially greater rates than currently assumed. PMID:19416879

Humphries, Stuart

2009-05-12

315

Multivariable optimization of liquid rocket engines using particle swarm algorithms

NASA Astrophysics Data System (ADS)

Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.

Jones, Daniel Ray

316

NASA Astrophysics Data System (ADS)

Diesel particle filters have become widely used in the United States since the introduction in 2007 of a more stringent exhaust particulate matter emission standard for new heavy-duty diesel vehicle engines. California has instituted additional regulations requiring retrofit or replacement of older in-use engines to accelerate emission reductions and air quality improvements. This presentation summarizes pollutant emission changes measured over several field campaigns at the Port of Oakland in the San Francisco Bay Area associated with diesel particulate filter use and accelerated modernization of the heavy-duty truck fleet. Pollutants in the exhaust plumes of hundreds of heavy-duty trucks en route to the Port were measured in 2009, 2010, 2011, and 2013. Ultrafine particle number, black carbon (BC), nitrogen oxides (NOx), and nitrogen dioxide (NO2) concentrations were measured at a frequency of at least 1 Hz and normalized to measured carbon dioxide concentrations to quantify fuel-based emission factors (grams of pollutant emitted per kilogram of diesel consumed). The size distribution of particles in truck exhaust plumes was also measured at 1 Hz. In the two most recent campaigns, emissions were linked on a truck-by-truck basis to installed emission control equipment via the matching of transcribed license plates to a Port truck database. Accelerated replacement of older engines with newer engines and retrofit of trucks with diesel particle filters reduced fleet-average emissions of BC and NOx. Preliminary results from the two most recent field campaigns indicate that trucks without diesel particle filters emit 4 times more BC than filter-equipped trucks. Diesel particle filters increase emissions of NO2, however, and filter-equipped trucks have NO2/NOx ratios that are 4 to 7 times greater than trucks without filters.
Preliminary findings related to particle size distribution indicate that (a) most trucks emitted particles characterized by a single mode of approximately 100 nm in diameter and (b) new trucks originally equipped with diesel particle filters were 5 to 6 times more likely than filter-retrofitted trucks and trucks without filters to emit particles characterized by a single mode in the range of 10 to 30 nm in diameter.

Kirchstetter, T.; Preble, C.; Dallmann, T. R.; DeMartini, S. J.; Tang, N. W.; Kreisberg, N. M.; Hering, S. V.; Harley, R. A.

2013-12-01

317

A two-dimensional (2D) optimal filter for highly nonstationary 2D signal estimation is developed. It is based on the real-time results of space/spatial-frequency (S/SF) analysis, on the correspondence of the filter's region of support (FRS) to the signal's local frequency (LF), and on the real-time LF estimation algorithm, also proposed in this paper. The filter permits multiple FRS detection in the

Veselin N. Ivanovic; Nevena Radovic; Srdjan Jovanovski

2011-01-01

318

ECG compression using wavelet transform and particle swarm optimization.

A new adaptive thresholding mechanism to determine the significant wavelet coefficients of an electrocardiogram (ECG) signal is proposed. It is based on estimating thresholds for different sub-bands using the concept of energy packing efficiency (EPE). The thresholds are then optimized using the particle swarm optimization (PSO) algorithm to achieve a target compression ratio with minimum distortion. Simulation results on several records taken from the MIT-BIH Arrhythmia database show that the PSO converges exactly to the target compression ratio after four iterations, while the cost function achieves its minimum value after six iterations. Compared to previously published schemes, lower distortions are achieved for the same compression ratios. PMID:21476789
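One plausible reading of EPE-based thresholding is sketched below: pick, per sub-band, the smallest magnitude threshold whose retained coefficients still carry a target fraction of the sub-band's energy. This is an illustrative interpretation only; the PSO optimization of the thresholds described in the abstract is not shown:

```python
import numpy as np

def epe_threshold(coeffs, epe_target=0.99):
    """Smallest magnitude threshold retaining at least `epe_target`
    of the sub-band's energy (energy packing efficiency).

    Coefficients with magnitude below the returned threshold can be
    zeroed while keeping the target energy fraction.
    """
    c = np.abs(np.asarray(coeffs, dtype=float)).ravel()
    order = np.sort(c)[::-1]                  # largest magnitudes first
    energy = np.cumsum(order ** 2)            # cumulative retained energy
    k = np.searchsorted(energy, epe_target * energy[-1])
    return order[min(k, len(order) - 1)]
```

Compression then amounts to zeroing all coefficients whose magnitude falls below the per-sub-band threshold before entropy coding.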

Alshamali, A; Al-Aqil, M

2011-01-01

319

Optimal Pid Tuning for Power System Stabilizers Using Adaptive Particle Swarm Optimization Technique

NASA Astrophysics Data System (ADS)

An application of intelligent search techniques to find the optimal parameters of a power system stabilizer (PSS) based on a proportional-integral-derivative (PID) controller for a single-machine infinite-bus system is presented. An efficient intelligent search technique, adaptive particle swarm optimization (APSO), is employed to demonstrate the usefulness of intelligent search techniques in tuning the PID-PSS parameters. The damping of the system is improved by minimizing an objective function with adaptive particle swarm optimization. At the same operating point, the PID-PSS parameters are also tuned by the Ziegler-Nichols method. The performance of the proposed controller is compared to that of the conventional Ziegler-Nichols-tuned PID controller. The results reveal the superior effectiveness of the proposed APSO-based PID controller.

Oonsivilai, Anant; Marungsri, Boonruang

2008-10-01

320

A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Due to the noisy and erroneous measurements of a MEMS gyroscope, it is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with the drift dynamics of the gyroscope, are employed as the system state space model. Nonlinear attitude kinematics, drift and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear and Gaussian subsystem consisting of the angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as the input of the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method. PMID:24342270
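The KF stage for a drift-augmented subsystem can be illustrated with a simplified one-axis sketch: the state holds an angle and the gyro bias, the gyro rate drives the prediction, and an external (e.g. vision-derived) angle measurement corrects both. The state layout and noise values are hypothetical simplifications of the 3D cascaded design described above:

```python
import numpy as np

def kf_angle_bias(gyro_rates, angle_meas, dt=0.01,
                  q_angle=1e-5, q_bias=1e-6, r_meas=1e-2):
    """Linear Kalman filter estimating an angle and the gyro drift bias."""
    x = np.zeros(2)                           # state: [angle, bias]
    P = np.eye(2)                             # state covariance
    F = np.array([[1.0, -dt], [0.0, 1.0]])    # angle += (rate - bias) * dt
    B = np.array([dt, 0.0])                   # gyro rate acts as control input
    H = np.array([[1.0, 0.0]])                # only the angle is observed
    Q = np.diag([q_angle, q_bias])            # process noise
    for rate, z in zip(gyro_rates, angle_meas):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + B * rate
        P = F @ P @ F.T + Q
        # Correct with the external angle measurement.
        innov = z - (H @ x)[0]
        S = (H @ P @ H.T)[0, 0] + r_meas
        K = (P @ H.T)[:, 0] / S
        x = x + K * innov
        P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x                                  # [angle estimate, bias estimate]
```

Run on a gyro stream with a constant bias and drift-free angle measurements, the bias estimate converges to the true bias, which is the role this subsystem plays before feeding the particle filter stage.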

Sadaghzadeh N, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

2014-03-01

321

NASA Astrophysics Data System (ADS)

Underground flow systems, such as oil or gas reservoirs and CO2 storage sites, are an important and challenging class of complex dynamic systems. Lacking information about distributed systems properties (such as porosity, permeability,...) leads to model uncertainties up to a level where quantification of uncertainties may become the dominant question in application tasks. History matching to past production data becomes an extremely important issue in order to improve the confidence of prediction. The accuracy of history matching depends on the quality of the established physical model (including, e.g. seismic, geological and hydrodynamic characteristics, fluid properties etc). The history matching procedure itself is very time consuming from the computational point of view. Even one single forward deterministic simulation may require parallel high-performance computing. This fact makes a brute-force non-linear optimization approach not feasible, especially for large-scale simulations. We present a novel framework for history matching which takes into consideration the nonlinearity of the model and of inversion, and provides a cheap but highly accurate tool for reducing prediction uncertainty. We propose an advanced framework for history matching based on the polynomial chaos expansion (PCE). Our framework reduces complex reservoir models and consists of two main steps. In step one, the original model is projected onto a so-called integrative response surface via very recent PCE technique. This projection is totally non-intrusive (following a probabilistic collocation method) and optimally constructed for available reservoir data at the prior stage of Bayesian updating. The integrative response surface keeps the nonlinearity of the initial model at high order and incorporates all suitable parameters, such as uncertain parameters (porosity, permeability etc.) and design or control variables (injection rate, depth etc.). 
Technically, the computational costs for constructing the response surface depend on the number of parameters and the expansion degree. Step two consists of Bayesian updating in order to match the reduced model to available measurements of state variables or other past or real-time observations of system behavior (e.g. past production data or pressure at monitoring wells during a certain time period). In step two we apply particle filtering on the integrative response surface constructed in step one. Particle filtering is a strong technique for Bayesian updating which accounts for the nonlinearity of the inverse problem in history matching more accurately than the ensemble Kalman filter does. Thanks to the computational efficiency of PCE and the integrative response surface, Bayesian updating for history matching becomes an interactive task and can incorporate real-time measurements.

Oladyshkin, S.; Class, H.; Helmig, R.; Nowak, W.

2011-12-01

322

As the use of approximations is often the only way to deal with the optimization of complex structures, this paper discusses the use of Kalman filtering as a new approach for building global approximations. Basic ideas and procedures of Kalman filters are first recalled. Next, key elements of how to implement the method for design problems are described. Finally, in

E. Lemenager; T. Bouet; V. Braibant

1997-01-01

323

Numerical experiments with an implicit particle filter for the shallow water equations

NASA Astrophysics Data System (ADS)

The estimation of initial conditions for the shallow water equations for a given set of later data is a well known test problem for data assimilation codes. A popular approach to this problem is the variational method (4D-Var), i.e. the computation of the mode of the posterior probability density function (pdf) via the adjoint technique. Here, we improve on 4D-Var by computing the conditional mean (the minimum least-squares error estimator) rather than the mode (a biased estimator) and we do so with implicit sampling, a Monte Carlo (MC) importance sampling method. The idea in implicit sampling is to first search for the high-probability region of the posterior pdf and then to find samples in this region. Because the samples are concentrated in the high-probability region, fewer samples are required than with competing MC schemes. The search for the high-probability region can be implemented by a minimization that is very similar to the minimization in 4D-Var, and we make use of a 4D-Var code in our implementation. The samples are obtained by solving algebraic equations with a random right-hand-side. These equations can be solved efficiently, so that the additional cost of our approach, compared to traditional 4D-Var, is small. The long-term goal is to assimilate experimental data, obtained with the CORIOLIS turntable in Grenoble (France), to study the drift of a vortex. We present results from numerical twin experiments as a first step towards our long-term goal. We discretize the shallow water equations on a square domain (2.5 m × 2.5 m) using finite differences on a staggered grid of size 28 × 28 and a fourth-order Runge-Kutta scheme. We assume open boundary conditions and estimate the initial state (velocities and surface height) given noisy observations of the state. We solve the optimization problem using a 4D-Var code that relies on an L-BFGS method; the random algebraic equations are solved with random maps, i.e. 
we look for solutions in given, but random, directions of the state space. In our numerical experiments, we varied the availability of the data (in both space and time) as well as the variance of the observation noise. We found that the implicit particle filter is reliable and efficient in all scenarios we considered. The implicit sampling method could improve the accuracy of the traditional variational approach. Moreover, we obtain quantitative measures of the uncertainty of the state estimate ``for free,'' while no information about the uncertainty is easily available using the traditional 4D-Var method only.
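The payoff of concentrating samples in the high-probability region can be illustrated with a toy self-normalized importance sampler: the effective sample size (ESS) collapses when the proposal is poorly matched to the posterior. This is a hypothetical one-dimensional example, not the shallow-water experiment itself:

```python
import numpy as np

def ess(logw):
    """Effective sample size from unnormalized log importance weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def log_gauss(x, mu, sigma):
    """Gaussian log-density up to an additive constant."""
    return -0.5 * ((x - mu) / sigma) ** 2

rng = np.random.default_rng(0)
n = 1000
# Hypothetical 1D posterior: N(2, 0.1^2).
# Proposal A samples the high-probability region (the implicit-sampling idea);
# proposal B is a broad prior-like density.
xa = rng.normal(2.0, 0.1, n)
logw_a = log_gauss(xa, 2.0, 0.1) - log_gauss(xa, 2.0, 0.1)  # matched: flat weights
xb = rng.normal(0.0, 2.0, n)
logw_b = log_gauss(xb, 2.0, 0.1) - log_gauss(xb, 0.0, 2.0)  # mismatched
```

With the matched proposal every particle contributes (ESS equals the sample count); with the broad proposal a handful of particles carry nearly all the weight, which is the degeneracy implicit sampling is designed to avoid.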

Souopgui, I.; Chorin, A. J.; Hussaini, M.

2012-12-01

324

Searching for coordinated activity cliffs using particle swarm optimization.

Activity cliffs are formed by structurally similar compounds having large potency differences. Coordinated activity cliffs evolve when compounds within groups of structural neighbors form multiple cliffs with different partners, giving rise to local networks of cliffs in a data set. Using particle swarm optimization, a machine learning approach, we systematically searched for coordinated activity cliffs in different compound sets. Regardless of the global SAR characteristics of these data sets, coordinated activity cliffs introducing strong local SAR discontinuity were identified in most cases. Compound subsets forming coordinated activity cliffs represent centers of SAR discontinuity and have high SAR information content. Through particle swarm optimization guided by subset discontinuity scoring, compounds forming the largest coordinated activity cliffs can automatically be extracted from large compound data sets. PMID:22404190

Namasivayam, Vigneshwaran; Bajorath, Jürgen

2012-04-23

325

A Geometry-Based Particle Filtering Approach to White Matter Tractography

We introduce a fibre tractography framework based on a particle filter which estimates a local geometrical model of the underlying white matter tract, formulated as a `streamline flow' using generalized helicoids. The method is not dependent on the diffusion model, and is applicable to diffusion tensor (DT) data as well as to high angular resolution reconstructions. The geometrical model allows for a robust inference of local tract geometry, which, in the context of the causal filter estimation, guides tractography through regions with partial volume effects. We validate the method on synthetic data and present results on two types in vivo data: diffusion tensors and a spherical harmonic reconstruction of the fibre orientation distribution function (fODF). PMID:20879320

Savadjiev, Peter; Rathi, Yogesh; Malcolm, James G.; Shenton, Martha E.; Westin, Carl-Fredrik

2011-01-01

326

Application of particle swarm optimization algorithm to image texture classification

NASA Astrophysics Data System (ADS)

This paper describes a kind of robust texture feature invariant to rotation and scale changes: the texture energy associated with a mask generated by particle swarm optimization algorithms. The detailed procedure and algorithm to generate the mask are discussed in the paper. Furthermore, feature extraction experiments on aerial images were performed. Experimental results indicate that the robust feature is effective and that the PSO-based algorithm is a viable approach for the "tuned" mask training problem.

Ye, Zhiwei; Zheng, Zhaobao; Zhang, Jinping; Yu, Xin

2007-12-01

327

This work aimed to inform the design of ceramic pot filters to be manufactured by the organization Pure Home Water (PHW) in Northern Ghana, and to model the flow through an innovative paraboloid-shaped ceramic pot filter. ...

Miller, Travis Reed

2010-01-01

328

The use of an inert, radioactively labeled microsphere as a measure of particle accumulation (filtration activity) by Mulinia lateralis (Say) and Mytilus edulis L. was evaluated. Bottom sediment plus temperature and salinity of the water were varied to induce changes in filtratio...

329

Optimal estimation of diffusion coefficients from single-particle trajectories

NASA Astrophysics Data System (ADS)

How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycosylase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
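In its simplest form (no substrate fluctuations, no motion blur), the CVE has a closed form: the mean squared step plus a covariance term between consecutive steps that cancels the bias from static localization noise. A sketch under those assumptions:

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimate of the diffusion coefficient from a
    single trajectory x sampled at interval dt:

        D = <dx_n^2> / (2 dt) + <dx_n dx_{n+1}> / dt

    The covariance term cancels the bias from static localization noise,
    which inflates naive mean-squared-displacement estimates.
    """
    dx = np.diff(np.asarray(x, dtype=float))
    return np.mean(dx ** 2) / (2 * dt) + np.mean(dx[:-1] * dx[1:]) / dt
```

For pure diffusion the covariance term averages to zero; with localization noise of variance σ², consecutive steps are anticorrelated by exactly −σ², so the two terms combine to an unbiased estimate of D.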

Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

2014-02-01

330

Optimizing Magnetite Nanoparticles for Mass Sensitivity in Magnetic Particle Imaging

Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, we explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally, at an arbitrarily chosen frequency of 250 kHz, using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. Results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: Our experimental results show clear variation in the MPI signal intensity as a function of MNP size that is in good agreement with modeled results. A maximum in the plot of MPI signal vs. MNP size indicates there is a particular size that is optimal for the chosen frequency of 250 kHz. Conclusions: For MPI at any chosen frequency, there will exist a characteristic particle size that generates maximum signal amplitude. We illustrate this at 250 kHz with particles of 15 nm core diameter.
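The Langevin function at the core of the equilibrium magnetization model is simple to evaluate; the sketch below covers only this piece, while the full model in the abstract also folds in the size distribution and size-dependent relaxation, which are omitted here:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: the equilibrium reduced
    magnetization of an ensemble of superparamagnetic moments, where x
    is the ratio of magnetic to thermal energy."""
    if abs(x) < 1e-4:
        return x / 3.0 - x ** 3 / 45.0   # series expansion avoids 0/0
    return 1.0 / math.tanh(x) - 1.0 / x
```

L is linear (slope 1/3) for weak fields and saturates toward 1 for strong fields; since the argument scales with particle volume, larger particles are driven further into the nonlinear regime that generates the detected third harmonic.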

Ferguson, R. Matthew; Minard, Kevin R.; Khandhar, Amit P.; Krishnan, Kannan M.

2011-03-01

331

Parameter estimation using Multiobjective Particle Swarm Optimization (MOPSO)

NASA Astrophysics Data System (ADS)

In the current application, a multiobjective optimization approach is presented for the estimation of parameters of hydrologic models. The complexity of hydrologic processes demands efficient and effective tools to fully determine system characteristics. A relatively new optimization algorithm, known as particle swarm optimization (PSO), has been employed here for parameter estimation. The PSO algorithm comes from the family of evolutionary computation techniques and has been applied in various other fields. The approach was initially devised for a single objective function, but in the current application we introduce a multiobjective algorithm, called multiobjective particle swarm optimization (MOPSO), and test it on two different kinds of modeling efforts in hydrology: a support vector machine (SVM) model for predicting soil moisture, and a well-known conceptual rainfall-runoff (CRR) model, the Sacramento Soil Moisture Accounting (SAC-SMA) model, for estimating streamflow. The algorithm is modified to address multiobjective problems by introducing the Pareto rank concept. The performance of the algorithm is also tested for two test functions.
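The Pareto rank concept mentioned above rests on a dominance test: a solution is non-dominated if no other solution is at least as good in every objective and strictly better in one. A minimal sketch (illustrative only, not the MOPSO implementation itself):

```python
def dominates(q, p):
    """True if q Pareto-dominates p (all objectives minimized)."""
    return (all(a <= b for a, b in zip(q, p))
            and any(a < b for a, b in zip(q, p)))

def pareto_front(points):
    """Indices of non-dominated points, assuming every objective is minimized."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

In a multiobjective PSO, each particle's rank (front membership) replaces the single fitness value when selecting personal and global guides.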

Gill, M.

2005-12-01

332

Physical principle for optimizing electrophoretic separation of charged particles

NASA Astrophysics Data System (ADS)

Electrophoresis is one of the most important methods for separating colloidal particles, carbohydrates, pharmaceuticals, and biological molecules such as DNA, RNA, proteins, in terms of their charge (or size). This method relies on the correlation between the particle drift velocity and the charge (or size). For a high-resolution separation, we need to minimize fluctuations of the drift velocity of particles or molecules. For a high throughput, on the other hand, we need a concentrated solution, in which many-body electrostatic and hydrodynamic interactions may increase velocity fluctuations. Thus, it is crucial to reveal what physical factors destabilize the coherent electrophoretic motion of charged particles. However, this is not an easy task due to complex dynamic couplings between particle motion, hydrodynamic flow, and motion of ion clouds. Here we study this fundamental problem using numerical simulations. We reveal that addition of salt screens both electrostatic and hydrodynamic interactions, but in a different manner. This allows us to minimize the fluctuations of the particle drift velocity for a particular salt concentration. This may have an impact not only on the basic physical understanding of dynamics of driven charged colloids, but also on the optimization of electrophoretic separation.

Araki, Takeaki; Tanaka, Hajime

2008-04-01

333

Fault diagnosis of sensor by chaos particle swarm optimization algorithm and support vector machine

Timely and accurate fault diagnosis of sensors is very important to improve the reliable operation of systems. In this study, fault diagnosis of sensors by a chaos particle swarm optimization algorithm and support vector machine (SVM) is presented, where chaos particle swarm optimization is chosen to determine the parameters of the SVM. Chaos particle swarm optimization is a kind of

Chenglin Zhao; Xuebin Sun; Songlin Sun; Ting Jiang

2011-01-01

334

Combining the classical Kalman filter with NIR analysis technology, a new method of characteristic wavelength variable selection, namely the Kalman filtering method, is presented. The principle of the Kalman filter for selecting the optimal wavelength variables was analyzed. The wavelength selection algorithm was designed and applied to NIR detection of soybean oil acid value. First, PLS (partial least-squares) models were established using different absorption bands of oil. The 4 472-5 000 cm(-1) characteristic band of oil acid value, including 132 wavelengths, was selected preliminarily. Then the Kalman filter was used to select characteristic wavelengths further. The PLS calibration model was established using the 22 selected characteristic wavelength variables; the determination coefficient R2 of the prediction set and the RMSEP (root mean squared error of prediction) are 0.970 8 and 0.125 4 respectively, equivalent to those of the 132-wavelength model, while the number of wavelength variables was reduced to 16.67% of the original. This algorithm is a deterministic iteration, without complex parameter settings or randomness in variable selection, and its physical significance is well defined. Modeling with the few selected characteristic wavelength variables that most strongly affect the model, instead of the total spectrum, decreases the complexity of the model while improving its robustness. The research offers an important reference for the next step of developing dedicated near-infrared spectroscopy analysis instruments for oil. PMID:25007608

Wang, Li-Qi; Ge, Hui-Fang; Li, Gui-Bin; Yu, Dian-Yu; Hu, Li-Zhi; Jiang, Lian-Zhou

2014-04-01

335

Optimal hydrograph separation filter to evaluate transport routines of hydrological models

NASA Astrophysics Data System (ADS)

Hydrograph separation (HS) using recursive digital filter approaches focuses on trying to distinguish between the rapidly occurring discharge components like surface runoff, and the slowly changing discharge originating from interflow and groundwater. Filter approaches are mathematical procedures, which perform the HS using a set of separation parameters. The first goal of this study is an attempt to minimize the subjective influence that a user of the filter technique exerts on the results through the choice of such filter parameters. A simple optimal HS (OHS) technique for the estimation of the separation parameters was introduced, relying on measured stream hydrochemistry. The second goal is to use the OHS parameters to develop a benchmark model that can be used as a geochemical model itself, or to test the performance of process-based hydro-geochemical models. The benchmark model quantifies the degree of knowledge that the stream flow time series itself contributes to the hydrochemical analysis. Results of the OHS show that the two HS fractions ("rapid" and "slow") differ according to the geochemical substances which were selected. The OHS parameters were then used to demonstrate how to develop a benchmark model for hydro-chemical predictions. Finally, predictions of solute transport from a process-based hydrological model were compared to the proposed benchmark model. Our results indicate that the benchmark model illustrated and quantified the contribution of the modeling procedure better than using only traditional measures like r2 or the Nash-Sutcliffe efficiency.
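A recursive digital filter of the kind the abstract describes can be sketched with the widely used one-parameter Lyne-Hollick form (an illustration only; the parameter value 0.925 is a conventional default, not an OHS estimate from hydrochemistry):

```python
import numpy as np

def lyne_hollick(q, a=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.
    q : streamflow series; a : separation parameter (the kind of parameter
    the OHS technique would estimate instead of choosing subjectively)."""
    quick = np.zeros_like(q, dtype=float)
    for k in range(1, len(q)):
        quick[k] = a * quick[k - 1] + 0.5 * (1 + a) * (q[k] - q[k - 1])
        quick[k] = min(max(quick[k], 0.0), q[k])   # keep components physical
    base = q - quick                                # slow component
    return quick, base

q = np.array([1, 1, 5, 9, 6, 4, 3, 2, 1.5, 1.2], dtype=float)
quick, base = lyne_hollick(q)
```

The "rapid" fraction rises with the hydrograph peak while the "slow" fraction stays bounded by total flow.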

Rimmer, Alon; Hartmann, Andreas

2014-05-01

336

The application of uniform design in parameter establishment of particle swarm optimization

Satisfactory results have been acquired in function optimization by adopting particle swarm optimization. Its significant features are a simple expression, few parameters and easy operation. However, the selection of key parameters has a great influence on the algorithm's effectiveness. The setting of the parameters of particle swarm optimization has typically been determined by experience and experiment. This leads to heavy work and makes the optimal

Shanhe Jiang; Yusheng Chen; Julang Jiang; Qishen Wang

2008-01-01

337

Adaptive Radiation Pattern Optimization for Antenna Arrays by Phase Perturbations using Particle

processing is considered the last frontier in the battle for improved cellular systems and smart antenna arrays [7] by setting the distance between the elements. However, for the case of smart antennas

Arslan, Tughrul

338

Particle Swarm Optimization and Fitness Sharing to solve Multi-Objective Optimization Problems

The particle swarm optimization algorithm has been shown to be a competitive heuristic. Developed by Kennedy and Eberhart [14], it is basically inspired by bird flocking. The main idea is based on the way birds travel when trying to find sources of food, or similarly the way a fish school will behave.

Coello, Carlos A. Coello

339

Proportional-integral-derivative (PID) controllers have been the most popular controllers of this century because of their remarkable effectiveness, simplicity of implementation and broad applicability. However, PID controllers are often poorly tuned in practice, with most of the tuning done manually, which is difficult and time consuming. Computational intelligence has proposed genetic algorithms (GA) and particle swarm optimization (PSO) as open paths

Mohammed El-Said El-Telbany

2007-01-01

340

This paper presents a probabilistic method for active localization of needle and targets in robotic image guided interventions. Specifically, an active localization scenario where the system directly controls the imaging system to actively localize the needle and target locations using intra-operative medical imaging (e.g., computerized tomography and ultrasound imaging) is explored. In the proposed method, the active localization problem is posed as an information maximization problem, where the beliefs for the needle and target states are represented and estimated using particle filters. The proposed method is also validated using a simulation study. PMID:25383257

Renfrew, Mark; Bai, Zhuofu; Çavuşoğlu, M. Cenk

2014-01-01

341

Particle swarm optimization of ascent trajectories of multistage launch vehicles

NASA Astrophysics Data System (ADS)

Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. 
The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
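The three-term velocity update described above is the core of the basic swarming technique the author employs; a generic sketch on a test function (parameter values are common defaults, not those of the rocket-trajectory study):

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n, dim))      # particle positions
    v = np.zeros((n, dim))                        # particle velocities
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Three-term update: inertia + cognitive + social, with stochastic weights.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, val = pso(lambda z: np.sum(z ** 2))   # minimize the sphere function
```

At the end of the run the best particle is returned, mirroring the selection step the abstract describes.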

Pontani, Mauro

2014-02-01

342

Particle swarm optimization for the clustering of wireless sensors

NASA Astrophysics Data System (ADS)

Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique where a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for clustering optimally a wireless sensor network.

Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.

2003-07-01

343

Particle Swarm and Ant Colony Approaches in Multiobjective Optimization

NASA Astrophysics Data System (ADS)

The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.

Rao, S. S.

2010-10-01

344

Several studies have shown the importance of particle losses in real homes due to deposition and filtration; however, none have quantitatively shown the impact of using a central forced air fan and in-duct filter on particle loss rates. In an attempt to provide such data, we me...

345

In this study, the feasibility of spatial filter velocimetry (SFV) as a process analytical technology tool for the in-line monitoring of the particle size distribution during top spray fluidized bed granulation was examined. The influence of several process variables (inlet air temperature during spraying and drying) and formulation variables (HPMC and Tween 20 concentration) upon the particle size distribution during processing, and

A. Burggraeve; T. Van Den Kerkhof; M. Hellings; J. P. Remon; C. Vervaet; T. De Beer

2010-01-01

346

NASA Astrophysics Data System (ADS)

A novel algorithm is presented in this study for the estimation of a spacecraft's attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computation burden as well. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
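For context, the baseline bootstrap particle filter that schemes like CQPF improve upon can be sketched on a scalar model (an illustration with assumed noise levels, not the attitude estimator itself):

```python
import numpy as np

def bootstrap_pf(zs, n=500, q=0.1, r=0.5, seed=0):
    """Bootstrap (SIR) particle filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k.
    With the transition prior as proposal, weights are just the likelihoods,
    so the latest measurement never shapes where particles are drawn."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n)                    # initial ensemble
    means = []
    for z in zs:
        particles = particles + rng.normal(0.0, q, n)      # propagate dynamics
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)      # Gaussian likelihood
        w /= w.sum()
        means.append(np.sum(w * particles))                # weighted posterior mean
        particles = particles[rng.choice(n, size=n, p=w)]  # multinomial resampling
    return np.array(means)

# Track a constant true state of 2.0 from noisy measurements.
zs = 2.0 + np.random.default_rng(1).normal(0.0, 0.5, 100)
means = bootstrap_pf(zs)
```

Measurement-informed proposals (as in SR-CQKF-based importance sampling) replace the blind propagation step above.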

Kiani, Maryam; Pourtakdoust, Seid H.

2014-12-01

347

A radiative transfer scheme that considers absorption, scattering, and distribution of light-absorbing elemental carbon (EC) particles collected on a quartz-fiber filter was developed to explain simultaneous filter reflectance and transmittance observations prior to and during...

348

Discrete Particle Swarm Optimization with Scout Particles for Library Materials Acquisition

Materials acquisition is one of the critical challenges faced by academic libraries. This paper presents an integer programming model of the studied problem by considering how to select materials in order to maximize the average preference and the budget execution rate under some practical restrictions including departmental budget, limitation of the number of materials in each category and each language. To tackle the constrained problem, we propose a discrete particle swarm optimization (DPSO) with scout particles, where each particle, represented as a binary matrix, corresponds to a candidate solution to the problem. An initialization algorithm and a penalty function are designed to cope with the constraints, and the scout particles are employed to enhance the exploration within the solution space. To demonstrate the effectiveness and efficiency of the proposed DPSO, a series of computational experiments are designed and conducted. The results are statistically analyzed, and it is evinced that the proposed DPSO is an effective approach for the studied problem. PMID:24072983

Lin, Bertrand M. T.

2013-01-01

349

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best. After constructing the potential particle position, some local search is done around its neighbourhood in comparison with the current swarm global best position. It is then used to replace the global best particle position if it is found to be better; otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants: two of the variants implement different local search techniques while the other two do not. Results show that the improved algorithms could obtain better quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants. PMID:24723827
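One reading of the construction step described above, sketched in code (the function names, step size, and local-search budget are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def construct_candidate(pbest, rng):
    """Build a trial position: for each dimension, a randomly chosen particle
    donates that coordinate from its personal best."""
    n, dim = pbest.shape
    donors = rng.integers(0, n, size=dim)
    return pbest[donors, np.arange(dim)]

def maybe_replace_gbest(f, pbest, gbest, rng, step=0.1, tries=10):
    """Local search around the constructed candidate; replace the global best
    only if the result is better, otherwise keep gbest unchanged."""
    cand = construct_candidate(pbest, rng)
    best = cand
    for _ in range(tries):
        trial = cand + rng.normal(0.0, step, cand.shape)
        if f(trial) < f(best):
            best = trial
    return best if f(best) < f(gbest) else gbest

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2)
pbest = rng.normal(0.0, 0.5, (20, 3))   # personal bests near the optimum
gbest = np.full(3, 5.0)                 # a poor current global best
new_g = maybe_replace_gbest(f, pbest, gbest, rng)
```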

Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

2014-01-01

350

Optimal Shape in Electromagnetic Scattering by Small Aspherical Particles

NASA Astrophysics Data System (ADS)

We consider the question of optimal shape for scattering by randomly oriented particles, e.g., shape causing minimal extinction among those of equal volume. Guided by the isoperimetric property of a sphere, relevant in the geometrical optics limit of scattering by large particles, we examine an analogous question in the low frequency (electrostatics) approximation, seeking to disentangle electric and geometric contributions. To that end, we survey the literature on shape functionals and focus on ellipsoids, giving a simple proof of spherical optimality for the coated ellipsoidal particle. Monotonic increase with asphericity in the low frequency regime for orientation-averaged induced dipole moments and scattering cross-sections is also established. Additional physical insight is obtained from the Rayleigh-Gans (transparent) limit and eccentricity expansions. We propose linking low and high frequency regimes in a single minimum principle valid for all size parameters, provided that reasonable size distributions wash out the resonances for intermediate size parameters. This proposal is further supported by the sum rule for integrated extinction. Implications for spectro-polarimetric scattering are explicitly considered.

Kostinski, A. B.; Mongkolsittisilp, A.

2013-12-01

351

Assimilation of microwave brightness temperatures for soil moisture estimation using particle filter

NASA Astrophysics Data System (ADS)

Soil moisture plays a significant role in global water cycles. Both model simulations and remote sensing observations have their limitations when estimating soil moisture on a large spatial scale. Data assimilation (DA) is a promising tool which can combine model dynamics and remote sensing observations to obtain a more precise ground soil moisture distribution. Among various DA methods, the particle filter (PF) can be applied to non-linear and non-Gaussian systems, thus holding great potential for DA. In this study, a data assimilation scheme based on the residual resampling particle filter (RR-PF) was developed to assimilate microwave brightness temperatures into the macro-scale semi-distributed Variable Infiltration Capacity (VIC) model to estimate surface soil moisture. A radiative transfer model (RTM) was used to link brightness temperatures with surface soil moisture. Finally, the data assimilation scheme was validated with experimental data obtained at Arizona during the Soil Moisture Experiment 2004 (SMEX04). The results show that the estimation accuracy of soil moisture can be improved significantly by RR-PF through assimilating microwave brightness temperatures into the VIC model. Both the overall trends and specific values of the assimilation results are more consistent with ground observations compared with model simulation results.
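The residual resampling step that gives RR-PF its name can be sketched as follows (a generic implementation, independent of the VIC/RTM assimilation machinery):

```python
import numpy as np

def residual_resample(weights, rng):
    """Residual resampling: take floor(n * w_i) copies of particle i
    deterministically, then fill the remaining slots by multinomial
    sampling on the normalized residual weights."""
    n = len(weights)
    counts = np.floor(n * weights).astype(int)     # deterministic copies
    residual = n * weights - counts                # leftover fractional mass
    k = n - counts.sum()                           # slots still to fill
    if k > 0:
        residual /= residual.sum()
        extra = rng.choice(n, size=k, p=residual)
        counts += np.bincount(extra, minlength=n)
    return np.repeat(np.arange(n), counts)         # resampled particle indices

rng = np.random.default_rng(0)
idx = residual_resample(np.array([0.5, 0.25, 0.125, 0.125]), rng)
```

Compared with pure multinomial resampling, the deterministic first stage lowers the variance introduced by resampling.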

Bi, H. Y.; Ma, J. W.; Qin, S. X.; Zeng, J. Y.

2014-03-01

352

A computational procedure is described for assigning the absolute hand of the structure of a protein or assembly determined by single-particle electron microscopy. The procedure requires a pair of micrographs of the same particle field recorded at two tilt angles of a single tilt-axis specimen holder together with the three-dimensional map whose hand is being determined. For orientations determined from particles on one micrograph using the map, the agreement (average phase residual) between particle images on the second micrograph and map projections is determined for all possible choices of tilt angle and axis. Whether the agreement is better at the known tilt angle and axis of the microscope or its inverse indicates whether the map is of correct or incorrect hand. An increased discrimination of correct from incorrect hand (free hand difference), as well as accurate identification of the known values for the tilt angle and axis, can be used as targets for rapidly optimizing the search or refinement procedures used to determine particle orientations. Optimized refinement reduces the tendency for the model to match noise in a single image, thus improving the accuracy of the orientation determination and therefore the quality of the resulting map. The hand determination and refinement optimization procedure is applied to image pairs of the dihydrolipoyl acetyltransferase (E2) catalytic core of the pyruvate dehydrogenase complex from Bacillus stearothermophilus taken by low-dose electron cryomicroscopy. Structure factor amplitudes of a three-dimensional map of the E2 catalytic core obtained by averaging untilted images of 3667 icosahedral particles are compared to a scattering reference using a Guinier plot. 
A noise-dependent structure factor weight is derived and used in conjunction with a temperature factor (B=-1000A(2)) to restore high-resolution contrast without amplifying noise and to visualize molecular features to 8.7A resolution, according to a new objective criterion for resolution assessment proposed here. PMID:14568533

Rosenthal, Peter B; Henderson, Richard

2003-10-31

353

Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish the fact that LDIW-PSO is very much efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO with the latter outperforming both in the simulation experiments conducted. PMID:24324383
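The LDIW strategy itself is a one-line schedule; a sketch with the commonly cited 0.9 to 0.4 range (the paper's tuned limits and velocity-clamping percentage may differ):

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: a large w early favors exploration,
    a small w late favors exploitation around the best-known positions."""
    return w_start - (w_start - w_end) * t / t_max
```

In a PSO loop, `w = ldiw(iteration, max_iterations)` is recomputed each step before the velocity update.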

Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

2013-01-01

354

NASA Astrophysics Data System (ADS)

Pt-Pd alloy nanoparticles, as potential catalyst candidates for new-energy applications such as fuel cells and lithium ion batteries owing to their excellent reactivity and selectivity, have attracted growing attention in the past years. Since structure determines the physical and chemical properties of nanoparticles, the development of a reliable method for searching the stable structures of Pt-Pd alloy nanoparticles has become increasingly important for exploring the origin of their properties. In this article, we have employed the particle swarm optimization algorithm to investigate the stable structures of alloy nanoparticles with fixed shape and atomic proportion. An improved discrete particle swarm optimization algorithm has been proposed and the corresponding scheme has been presented. Subsequently, the swap operator and swap sequence have been applied to reduce the probability of premature convergence to local optima. Furthermore, the parameters of the exchange probability and the 'particle' size have also been considered. Finally, tetrahexahedral Pt-Pd alloy nanoparticles have been used to test the effectiveness of the proposed method. The calculated results verify that the improved particle swarm optimization algorithm has superior convergence and stability compared with the traditional one.

Shao, Gui-Fang; Wang, Ting-Na; Liu, Tun-Dong; Chen, Jun-Ren; Zheng, Ji-Wen; Wen, Yu-Hua

2015-01-01

355

NASA Astrophysics Data System (ADS)

We examine the one-dimensional direct current method in anisotropic earth formation. We derive an analytic expression of a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model such as horizontal and vertical resistivities, and thickness. The particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5 % Gaussian noise to the synthetic data increases the ambiguity of the value of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use probability density function within 95 % confidence interval, parameter variation of each iteration and frequency distribution of the model parameters to reduce the ambiguity. The result is promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.

Pekşen, Ertan; Yas, Türker; Kıyak, Alper

2014-09-01

356

Panorama parking assistant system with improved particle swarm optimization method

NASA Astrophysics Data System (ADS)

A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.

Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

2013-10-01

357

Kalman-Filter Observer Design around Optimal Control Policy for Gas Pipelines

NASA Astrophysics Data System (ADS)

Seeking the optimal operating policy by an off-line controller for pipelines carrying natural gas has an inherent state estimation problem associated with deviations from the demand forecast. This paper presents a Kalman-filter-based observer for the real-time estimation of deviations from the states previously obtained optimally by an off-line controller, around an expected demand function. The observer is based on the linearized form of the non-linear partial differential equations which are the state space representation of isothermal and unidirectional gas flow through a pipeline. Data for the observer are produced by a dynamic simulator. The simulator and linearized observer equations are solved using an implicit finite element method. The observer has been tested on a pipeline subject to certain deviations from the demand forecast. It converges in a short span of time.

Durgut, İsmail; Leblebicioğlu, Kemal

1997-01-01

358

Optimal steering of inertial particles diffusing anisotropically with losses

Exploiting a fluid dynamic formulation for which a probabilistic counterpart might not be available, we extend the theory of Schroedinger bridges to the case of inertial particles with losses and general, possibly singular diffusion coefficient. We find that, as for the case of constant diffusion coefficient matrix, the optimal control law is obtained by solving a system of two p.d.e.'s involving adjoint operators and coupled through their boundary values. In the linear case with quadratic loss function, the system turns into two matrix Riccati equations with coupled split boundary conditions. An alternative formulation of the control problem as a semidefinite programming problem allows computation of suboptimal solutions. This is illustrated in one example of inertial particles subject to a constant rate killing.

Yongxin Chen; Tryphon T. Georgiou; Michele Pavon

2014-10-07

359

A Software Tool for Data Clustering Using Particle Swarm Optimization

NASA Astrophysics Data System (ADS)

Many universities all over the world have offered courses on swarm intelligence since the 1990s. Particle Swarm Optimization is a swarm intelligence technique. It is relatively young, with a pronounced need for a mature teaching method. This paper presents an educational software tool in MATLAB to aid the teaching of PSO fundamentals and its applications to data clustering. This software offers the advantage of running the classical K-Means clustering algorithm and also provides a facility to simulate the hybridization of K-Means with PSO to explore better clustering performance. The graphical user interfaces are user-friendly and offer good learning scope to aspiring learners of PSO.
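The classical K-Means step that the tool runs (and hybridizes with PSO) can be sketched as plain Lloyd iterations (illustrative only; the MATLAB tool's implementation details are not given in the abstract):

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's K-Means. In a K-Means/PSO hybrid, centroids like these
    typically seed, or are refined by, a PSO over centroid positions."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]          # initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 0.5, (50, 2)),
                    rng.normal(10.0, 0.5, (50, 2))])           # two synthetic blobs
centers, labels = kmeans(x, 2)
```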

Manda, Kalyani; Hanuman, A. Sai; Satapathy, Suresh Chandra; Chaganti, Vinaykumar; Babu, A. Vinaya

360

Particle Swarm Optimization with Watts-Strogatz Model

NASA Astrophysics Data System (ADS)

Particle swarm optimization (PSO) is a popular swarm intelligence methodology that simulates animal social behaviors. Recent study shows that this type of social behavior is a complex system; however, in most variants of PSO all individuals lie in a fixed topology, which conflicts with this natural phenomenon. Therefore, in this paper a new variant of PSO combined with the Watts-Strogatz small-world topology model, called WSPSO, is proposed. In WSPSO, the topology is changed according to Watts-Strogatz rules throughout the whole evolutionary process. Simulation results show the proposed algorithm is effective and efficient.
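A Watts-Strogatz small-world topology of the kind WSPSO rewires can be generated as follows (a generic sketch; how WSPSO schedules rewiring during evolution is not specified in the abstract):

```python
import numpy as np

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours on one
    side (so degree 2k), with each ring edge rewired with probability p."""
    rng = np.random.default_rng(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                       # build the regular ring lattice
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                       # rewire each original ring edge
        for j in range(1, k + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [m for m in range(n) if m != i and m not in adj[i]]
                if choices and old in adj[i]:
                    new = choices[rng.integers(len(choices))]
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

g = watts_strogatz(20, 2, 0.2)
```

Rewiring preserves the edge count while shortening average path lengths, which is the property a dynamic PSO neighbourhood would exploit.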

Zhu, Zhuanghua

361

Generating optimal initial conditions for smooth particle hydrodynamics (SPH) simulations

We present a new optimal method to set up initial conditions for smooth particle hydrodynamics simulations, which may also be of interest for N-body simulations. This new method is based on weighted Voronoi tessellations (WVTs) and can meet arbitrarily complex spatial resolution requirements. We conduct a comprehensive review of existing SPH setup methods, and outline their advantages, limitations and drawbacks. A serial version of our WVT setup method is publicly available, and we give detailed instructions on how to easily implement the new method on top of an existing parallel SPH code.

Diehl, Steven [Los Alamos National Laboratory; Rockefeller, Gabriel M [Los Alamos National Laboratory; Fryer, Christopher L [Los Alamos National Laboratory

2008-01-01

362

Particle swarm optimization applied to automatic lens design

NASA Astrophysics Data System (ADS)

This paper describes a novel application of the Particle Swarm Optimization (PSO) technique to lens design. A mathematical model is constructed, and merit functions of an optical system are employed as fitness functions, combining the radii of curvature, the thicknesses between lens surfaces and the refractive indices of the optical system. Using this function, aberration correction is carried out. A design example using PSO is given. Results show that PSO is practical and powerful as an optical design tool, and that the method is no longer dependent on the initial lens structure and can arbitrarily create search ranges for the structural parameters of a lens system, which is an important step towards automatic design with artificial intelligence.

Qin, Hua

2011-06-01

363

Optimization of nanoparticle core size for magnetic particle imaging

Magnetic Particle Imaging (MPI) is a powerful new diagnostic visualization platform designed for measuring the amount and location of superparamagnetic nanoscale molecular probes (NMPs) in biological tissues. Promising initial results indicate that MPI can be extremely sensitive and fast, with good spatial resolution for imaging human patients or live animals. Here, we present modeling results that show how MPI sensitivity and spatial resolution both depend on NMP-core physical properties, and how MPI performance can be effectively optimized through rational core design. Monodisperse magnetite cores are attractive since they are readily produced with a biocompatible coating and controllable size that facilitates quantitative imaging.

Ferguson, Matthew R.; Minard, Kevin R.; Krishnan, Kannan M.

2009-05-01

364

OBJECTIVES: Air pollution particulates have been identified as having adverse effects on respiratory health. The present study was undertaken to further clarify the effects of diesel exhaust on bronchoalveolar cells and soluble components in normal healthy subjects. The study was also designed to evaluate whether a ceramic particle trap at the end of the tail pipe, from an idling engine, would reduce indices of airway inflammation. METHODS: The study comprised three exposures in each of 10 healthy never-smoking subjects: air, diluted diesel exhaust, and diluted diesel exhaust filtered with a ceramic particle trap. The exposures were given for 1 hour in randomised order about 3 weeks apart. The diesel exhaust exposure apparatus has previously been carefully developed and evaluated. Bronchoalveolar lavage was performed 24 hours after exposures and the lavage fluids from the bronchial and bronchoalveolar region were analysed for cells and soluble components. RESULTS: The particle trap reduced the mean steady state number of particles by 50%, but the concentrations of the other measured compounds were almost unchanged. It was found that diesel exhaust caused an increase in neutrophils in airway lavage, together with an adverse influence on the phagocytosis by alveolar macrophages in vitro. Furthermore, the diesel exhaust was found to induce a migration of alveolar macrophages into the airspaces, together with a reduction in CD3+CD25+ cells (CD = cluster of differentiation). The use of the specific ceramic particle trap at the end of the tail pipe was not sufficient to completely abolish these effects when interacting with the exhaust from an idling vehicle. CONCLUSIONS: The current study showed that exposure to diesel exhaust may induce neutrophil and alveolar macrophage recruitment into the airways and suppress alveolar macrophage function. 
The particle trap did not cause significant reduction of effects induced by diesel exhaust compared with unfiltered diesel exhaust. Further studies are warranted to evaluate more efficient treatment devices to reduce adverse reactions to diesel exhaust in the airways. PMID:10492649

Rudell, B.; Blomberg, A.; Helleday, R.; Ledin, M. C.; Lundback, B.; Stjernberg, N.; Horstedt, P.; Sandstrom, T.

1999-01-01

365

Sizing of particles in industrial processes is of great technical interest and therefore different physical-based techniques have been developed. The objective of this study was to review the characteristics of modern sizing instruments based on a modified fibre-optical spatial filtering technique (SFT). Fibre-optical spatial filtering velocimetry was modified by fibre-optical spot scanning in order to determine simultaneously the size and

Petrak, Dieter; Dietrich, Stefan; Eckardt, Günter; Köhler, Michael

2011-01-01

366

GPU-Based Asynchronous Global Optimization with Particle Swarm

NASA Astrophysics Data System (ADS)

The recent upsurge in research into general-purpose applications for graphics processing units (GPUs) has made low-cost high-performance computing increasingly accessible. Many global optimization algorithms that have previously benefited from parallel computation are now poised to take advantage of general-purpose GPU computing as well. In this paper, a global parallel asynchronous particle swarm optimization (PSO) approach is employed to solve three relatively complex, realistic parameter estimation problems in which each processor performs significant computation. Although PSO is readily parallelizable, memory bandwidth limitations with GPUs must be addressed, which is accomplished by minimizing communication among individual population members through asynchronous operations. The effect of asynchronous PSO on robustness and efficiency is assessed as a function of problem and population size. Experiments were performed with different population sizes on NVIDIA GPUs and on single-core CPUs. Results for successful trials exhibit marked speedup increases with the population size, indicating that more particles may be used to improve algorithm robustness while maintaining nearly constant run time. This work also suggests that asynchronous operations on the GPU may be viable in stochastic population-based algorithms to increase efficiency without sacrificing the quality of the solutions.

Wachowiak, M. P.; Lambe Foster, A. E.

2012-10-01

367

A scatter learning particle swarm optimization algorithm for multimodal problems.

Particle swarm optimization (PSO) has been proved to be an effective tool for function optimization. Its performance depends heavily on the characteristics of the employed exemplars. This necessitates considering both the fitness and the distribution of exemplars in designing PSO algorithms. Following this idea, we propose a novel PSO variant, called scatter learning PSO algorithm (SLPSOA) for multimodal problems. SLPSOA contains some new algorithmic features while following the basic framework of PSO. It constructs an exemplar pool (EP) that is composed of a certain number of relatively high-quality solutions scattered in the solution space, and requires particles to select their exemplars from EP using the roulette wheel rule. By this means, more promising solution regions can be found. In addition, SLPSOA employs Solis and Wets' algorithm as a local searcher to enhance its fine search ability in the newfound solution regions. To verify the efficiency of the proposed algorithm, we test it on a set of 16 benchmark functions and compare it with six existing typical PSO algorithms. Computational results demonstrate that SLPSOA can prevent premature convergence and produce competitive solutions. PMID:24108491
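The roulette-wheel rule mentioned above selects exemplars with probability proportional to fitness. A generic sketch of that rule follows; SLPSOA's exemplar-pool construction and Solis and Wets' local search are not reproduced here:

```python
import random

def roulette_select(exemplar_pool, fitnesses):
    """Pick one exemplar with probability proportional to its fitness
    (maximization convention: larger fitness, larger selection chance)."""
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    acc = 0.0
    for exemplar, fit in zip(exemplar_pool, fitnesses):
        acc += fit
        if acc >= r:
            return exemplar
    return exemplar_pool[-1]   # guard against floating-point round-off

# Three hypothetical pool members; "s3" should be chosen ~60% of the time.
pool = ["s1", "s2", "s3"]
fits = [1.0, 3.0, 6.0]
picks = [roulette_select(pool, fits) for _ in range(10000)]
share_s3 = picks.count("s3") / len(picks)
```

Each particle would call `roulette_select` on the exemplar pool to choose the solution it learns from in the next velocity update.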

Ren, Zhigang; Zhang, Aimin; Wen, Changyun; Feng, Zuren

2014-07-01

368

Particle Swarm Optimization in Comparison with Classical Optimization for GPS Network Design

NASA Astrophysics Data System (ADS)

The Global Positioning System (GPS) is increasingly coming into use to establish geodetic networks. In order to meet the established aims of a geodetic network, it has to be optimized, depending on design criteria. Optimization of a GPS network can be carried out by selecting baseline vectors from all of the probable baseline vectors that can be measured in the network. Classically, a GPS network can be optimized using the trial-and-error method or analytical methods such as linear or nonlinear programming, or in some cases by generalized or iterative generalized inverses. Optimization problems may also be solved by intelligent optimization techniques such as Genetic Algorithms (GAs), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The purpose of the present paper is to show how PSO can be used to design a GPS network. The efficiency and applicability of the method are demonstrated with an example of a GPS network that has previously been solved using a classical method. Our example shows that PSO is effective, improving efficiency by 19.2% over the classical method.

Doma, M. I.

2013-12-01

369

Particle Swarm Optimization with Scale-Free Interactions

The particle swarm optimization (PSO) algorithm, in which individuals collaborate with their interacting neighbors like flocking birds to search for the optima, has been successfully applied in a wide range of fields pertaining to searching and convergence. Here we employ a scale-free network to represent the inter-individual interactions in the population, named SF-PSO. In contrast to traditional PSO with fully-connected or regular topology, the scale-free topology used in SF-PSO incorporates diversity in the individuals' searching and information-dissemination abilities, leading to a quite different optimization process. Systematic results on several standard test functions demonstrate that SF-PSO achieves a better balance between convergence speed and optimum quality, accounting for its much better performance than that of traditional PSO algorithms. We further explore the dynamical searching process microscopically, finding that the cooperation of hub nodes and non-hub nodes plays a crucial role in optimizing the convergence process. Our work may have implications in computational intelligence and complex networks. PMID:24859007
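The exact wiring used by SF-PSO is not specified in this abstract, but a scale-free interaction topology can be generated with Barabási-Albert-style preferential attachment, where each new node links to existing nodes with probability proportional to their current degree. A minimal sketch, under that assumption:

```python
import random

def scale_free_topology(n, m=2):
    """Preferential attachment: each new node links to m distinct existing
    nodes chosen with probability proportional to their current degree."""
    neigh = {i: set() for i in range(n)}
    repeated = list(range(m))          # node ids, repeated in proportion to degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:         # degree-biased sampling without replacement
            chosen.add(random.choice(repeated))
        for t in chosen:
            neigh[new].add(t)
            neigh[t].add(new)
            repeated += [new, t]       # both endpoints gain one degree
    return neigh

neigh = scale_free_topology(200)
degrees = sorted(len(v) for v in neigh.values())
max_deg, median_deg = degrees[-1], degrees[len(degrees) // 2]  # hubs emerge
```

In an SF-PSO-style variant, the global-best term of the canonical update would be replaced by the best position found within `neigh[i]`, so hub particles aggregate information quickly while leaf particles preserve search diversity.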

Liu, Chen; Du, Wen-Bo; Wang, Wen-Xu

2014-01-01

371

An iterative particle filter approach for respiratory motion estimation in nuclear medicine imaging

NASA Astrophysics Data System (ADS)

The continual improvement in the spatial resolution of Nuclear Medicine (NM) scanners has made accurate compensation of patient motion increasingly important. A major source of corrupting motion in NM acquisition is respiration. A particle filter (PF) approach has therefore been proposed as a powerful method for motion correction in NM. The probabilistic view of the system in the PF is an advantage given the complexity and uncertainties involved in estimating respiratory motion. Previous tests using XCAT have shown the possibility of estimating unseen organ configurations using training data consisting of only a single respiratory cycle. This paper augments the application-specific adaptation methods previously implemented for better PF estimates with an iterative model-update step. Results show that errors are further reduced within a small number of iterations, and such improvements will help the PF cope with more realistic and complex applications.
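The respiratory-motion filter above relies on trained XCAT motion models, but its underlying mechanics are those of a generic bootstrap (SIR) particle filter. A sketch on a toy one-dimensional random-walk state, with assumed noise levels:

```python
import random, math

def particle_filter(observations, n=500, proc_std=0.5, obs_std=0.5):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: propagate each particle through the motion model
        parts = [x + random.gauss(0.0, proc_std) for x in parts]
        # weight: Gaussian likelihood of the observation given each particle
        w = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in parts]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        # posterior-mean state estimate for this time step
        estimates.append(sum(x * wi for x, wi in zip(parts, w)))
        # resample: multinomial draw proportional to the weights
        parts = random.choices(parts, weights=w, k=n)
    return estimates

est = particle_filter([1.0, 1.2, 1.4, 1.6, 1.8])
```

In the motion-correction setting, the random-walk prediction would be replaced by a learned respiratory motion model and the scalar observation by acquired image data.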

Abd. Rahni, Ashrani Aizzuddin; Wells, Kevin; Lewis, Emma; Guy, Matthew; Goswami, Budhaditya

2011-03-01

372

In this paper a method to extract cerebral arterial segments from CT angiography (CTA) is proposed. The segmentation of cerebral arteries in CTA is a challenging task mainly due to bone contact and vein contamination. The proposed method considers a vessel segment as an ellipse travelling in three-dimensional (3D) space and segments it out by tracking the ellipse in spatial sequence. A particle filter is employed as the main framework for tracking and is equipped with adaptive properties to both bone contact and vein contamination. The proposed tracking method is evaluated by the experiments on both synthetic and actual data. A variety of vessels were synthesized to assess the sensitivity to the axis curvature change, obscure boundaries, and noise. The experimental results showed that the proposed method is also insensitive to parameter settings and requires less user intervention than the conventional vessel tracking methods, which proves its improved robustness. PMID:17045696

Shim, Hackjoon; Kwon, Dongjin; Yun, Il Dong; Lee, Sang Uk

2006-12-01

373

Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs to be performed only during the system design process, it places no additional computational burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
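The tuner-selection routine is an offline iterative search over candidate tuning-parameter vectors; each candidate's evaluation rests on the standard Kalman update. A scalar sketch of that building block (not NASA's implementation), estimating a constant health-like parameter from noisy sensor readings:

```python
import random

def kalman_constant(measurements, meas_var=1.0):
    """Scalar Kalman filter estimating a constant parameter from noisy sensors."""
    x, p = 0.0, 1e6              # initial estimate and (deliberately large) variance
    for z in measurements:
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)      # correct the estimate with the innovation
        p = (1.0 - k) * p        # shrink the estimate variance
    return x, p

random.seed(0)                   # deterministic demo
true_val = 3.0                   # hypothetical health parameter
zs = [true_val + random.gauss(0.0, 1.0) for _ in range(200)]
est, var = kalman_constant(zs)   # est approaches 3.0; var shrinks toward meas_var/n
```

The multi-variable tuner search described in the abstract would wrap an analogous (vector-valued) filter, scoring each candidate tuning-parameter vector by its theoretical mean-squared estimation error.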

Simon, Donald L.; Garg, Sanjay

2011-01-01

374

An Accelerated Particle Swarm Optimization Algorithm on Parametric Optimization of WEDM of Die-Steel

NASA Astrophysics Data System (ADS)

This study employed Accelerated Particle Swarm Optimization (APSO) algorithm to optimize the machining parameters that lead to a maximum Material Removal Rate (MRR), minimum surface roughness and minimum kerf width values for Wire Electrical Discharge Machining (WEDM) of AISI D3 die-steel. Four machining parameters that are optimized using APSO algorithm include Pulse on-time, Pulse off-time, Gap voltage, Wire feed. The machining parameters are evaluated by Taguchi's L9 Orthogonal Array (OA). Experiments are conducted on a CNC WEDM and output responses such as material removal rate, surface roughness and kerf width are determined. The empirical relationship between control factors and output responses are established by using linear regression models using Minitab software. Finally, APSO algorithm, a nature inspired metaheuristic technique, is used to optimize the WEDM machining parameters for higher material removal rate and lower kerf width with surface roughness as constraint. The confirmation experiments carried out with the optimum conditions show that the proposed algorithm was found to be potential in finding numerous optimal input machining parameters which can fulfill wide requirements of a process engineer working in WEDM industry.

Muthukumar, V.; Suresh Babu, A.; Venkatasamy, R.; Senthil Kumar, N.

2015-01-01

376

NASA Astrophysics Data System (ADS)

We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines, for every seller, whether or not to sell his/her article and, for every buyer, which article(s) to buy, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.

Masuda, Kazuaki; Aiyoshi, Eitaro

377

Industrial aquaculture wastewater contains large quantities of suspended particles that can be easily broken down physically. Introduction of macro-bio-filters, such as bivalve filter feeders, may offer the potential for treatment of fine suspended matter in industrial aquaculture wastewater. In this study, we employed two kinds of bivalve filter feeders, the Pacific oyster Crassostrea gigas and the blue mussel Mytilus galloprovincialis, to deposit suspended solids from marine fish aquaculture wastewater in flow-through systems. Results showed that the biodeposition rate of suspended particles by C. gigas (shell height: 8.67±0.99 cm) and M. galloprovincialis (shell height: 4.43±0.98 cm) was 77.84±7.77 and 6.37±0.67 mg ind⁻¹ d⁻¹, respectively. The total suspended solids (TSS) deposition rates of the oyster and mussel treatments were 3.73±0.27 and 2.76±0.20 times higher than that of the control treatment without bivalves, respectively. The TSS deposition rates of the bivalve treatments were significantly higher than the natural sedimentation rate of the control treatment (P<0.001). Furthermore, organic matter and C, N in the sediments of the bivalve treatments were significantly lower than those in the sediments of the control (P<0.05). It was suggested that the filter feeders C. gigas and M. galloprovincialis had considerable potential to filter and accelerate the deposition of suspended particles from industrial aquaculture wastewater, and simultaneously yield value-added biological products. PMID:25250730

Zhou, Yi; Zhang, Shaojun; Liu, Ying; Yang, Hongsheng

2014-01-01

378

Genetic algorithms (GA) have proven to be a useful method of optimization for difficult and discontinuous multidimensional engineering problems. A new method of optimization, particle swarm optimization (PSO), is able to accomplish the same goal as GA optimization in a new and faster way. The purpose of this paper is to investigate the foundations and performance of the two algorithms

Jacob Robinson; Seelig Sinton; Yahya Rahmat-Samii

2002-01-01

379

A computational, three-dimensional approach to investigate the behavior of diesel soot particles in the micro-channels of a wall-flow, porous-ceramic particulate filter is presented. The particle sizes examined are in the PM2.5 range. The flow field is simulated with a finite-volume Navier-Stokes solver and the Ergun equation is used to model the porous material. The permeability coefficients were obtained by fitting experimental data.

Fabio Sbrizzai; Paolo Faraldi; Alfredo Soldati

380

Multi-dipole EEG source localization using particle swarm optimization.

The multi-dipole EEG source localization problem is (usually) highly nonlinear with a non-convex cost function. Moreover, the gray matter tissue is located in several disjunct regions in the head which leads to a non-continuous solution space. For solving this problem an efficient algorithm which can handle multi-source activities is needed. In this paper, a modified particle swarm optimization (MPSO) method is proposed to solve the multi-dipole EEG source localization. The method is tested on synthetic EEG signals generated from two strong active sources and a noisy background source. The results show that using the new method is a reliable choice when we deal with a strong multi-active source scenario, in which a single dipole source localization may fail. PMID:24111195

Shirvany, Yazdan; Edelvik, Fredrik; Persson, Mikael

2013-01-01

381

Optimization of nanoparticle core size for magnetic particle imaging

Magnetic particle imaging (MPI) is a powerful new research and diagnostic imaging platform that is designed to image the amount and location of superparamagnetic nanoparticles in biological tissue. Here, we present mathematical modeling results that show how MPI sensitivity and spatial resolution both depend on the size of the nanoparticle core and its other physical properties, and how imaging performance can be effectively optimized through rational core design. Modeling is performed using the properties of magnetite cores, since these are readily produced with a controllable size that facilitates quantitative imaging. Results show that very low detection thresholds (of a few nanograms Fe3O4) and sub-millimeter spatial resolution are possible with MPI. PMID:19606261

Ferguson, R. Matthew; Minard, Kevin R.; Krishnan, Kannan M.

2009-01-01

382

Human tracking in thermal images using adaptive particle filters with online random forest learning

NASA Astrophysics Data System (ADS)

This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination, in the presence of shadows and cluttered backgrounds. To improve human tracking performance while minimizing computation time, this study proposes online learning of classifiers based on particle filters and a combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated by online learning using positive and negative examples with LID and OCS-LBP features. The learned RF classifiers are used to detect the most likely target position in the subsequent frame. The RFs are then learned again by fast retraining on the tracked object and background appearance in the new frame. The proposed algorithm was successfully tested on various thermal videos, and its tracking performance is better than that of other methods.

Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal

2013-11-01

383

The absorptivity and imaginary index of refraction for carbon and methylene blue particles were inferred from the photoacoustic spectra of samples collected on Teflon filter substrates. Three models of varying complexity were developed to describe the photoacoustic signal as a fu...

384

and Particle Filtering Algorithms for Tracking a Frequency-Hopped Signal

Alexandros Valyrakis, Efthimios E…

The problem of tracking a frequency-hopped signal without knowledge of its hopping pattern is considered. The problem is of interest in military communications, where, in addition to frequency, hop timing can also be randomly…

Sidiropoulos, Nikolaos D.

385

Microwave-based medical diagnosis using particle swarm optimization algorithm

NASA Astrophysics Data System (ADS)

This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. 
However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.

Modiri, Arezoo

386

Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, which have been widely applied in many scientific and engineering fields. Unfortunately, both of them can easily become trapped in local optima and lack the ability to escape them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balance parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of PSO and DE and maintains diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balance parameter is discussed in detail. PMID:24688370
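HPSO-DE's balance parameter and adaptive mutation are not reproduced here, but the DE half of such a hybrid can be sketched as the classic DE/rand/1/bin generation step, shown on a toy sphere function:

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9):
    """One generation of DE/rand/1/bin with greedy selection."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        # three distinct donors, none equal to the target vector i
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)   # ensures at least one mutated coordinate
        trial = [a[d] + f * (b[d] - c[d])
                 if (random.random() < cr or d == jrand) else pop[i][d]
                 for d in range(dim)]
        # greedy selection: keep whichever of trial/target is fitter
        new_pop.append(trial if fitness(trial) < fitness(pop[i]) else pop[i])
    return new_pop

sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(sphere(p) for p in pop)
```

A hybrid along HPSO-DE's lines would interleave this step with PSO velocity updates, with a balance parameter deciding which operator acts on which members.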

Yu, Xiaobing; Cao, Jie; Shan, Haiyan; Zhu, Li; Guo, Jun

2014-01-01

387

A Bayesian Interpretation of the Particle Swarm Optimization and Its Kernel Extension

Particle swarm optimization is a popular method for solving difficult optimization problems. There have been attempts to formulate the method in formal probabilistic or stochastic terms (e.g. bare bones particle swarm) with the aim of achieving more generality and explaining the practical behavior of the method. Here we present a Bayesian interpretation of particle swarm optimization. This interpretation provides a formal framework for the incorporation of prior knowledge about the problem being solved. Furthermore, it also allows extending the particle swarm method through the use of kernel functions that represent an intermediary transformation of the data into a different space where the optimization problem is expected to be easier to resolve; such a transformation can be seen as a form of prior knowledge about the nature of the optimization problem. We derive from the general Bayesian formulation the commonly used particle swarm methods as particular cases. PMID:23144937

Andras, Peter

2012-01-01

388

A methodology for optimum sampling frequency selection for wavelet feature extraction is presented. We show that classification accuracy is enhanced by adequately selecting the parameters: number of decomposition levels, wavelet function and sampling rate. A novel approach for selecting the parameters based on particle swarm optimization (PSO) is presented. Experimental results conducted on two different datasets with support vector machine (SVM) classifiers confirm the superiority and advantages of the proposed method. It is shown empirically that the proposed method outperforms significantly the existing methods in terms of accuracy rate. PMID:24109857

Guarnizo, C; Orozco, A A; Alvarez, M A

2013-01-01

389

NASA Astrophysics Data System (ADS)

Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
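The BMA predictive pdf described above is a weighted mixture of component densities. A minimal sketch with Gaussian components; the member forecasts, posterior weights, and spreads below are hypothetical (in practice the weights and variances are fitted, e.g. by EM, over the training period):

```python
import math

def bma_pdf(x, forecasts, weights, sigmas):
    """BMA predictive density: weighted mixture of Gaussians centred on the
    (bias-corrected) member forecasts."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * gauss(x, f, s) for w, f, s in zip(weights, forecasts, sigmas))

# Three hypothetical ensemble members with posterior weights summing to 1.
forecasts = [2.0, 2.5, 3.2]
weights = [0.5, 0.3, 0.2]
sigmas = [0.4, 0.4, 0.6]
density_at_2 = bma_pdf(2.0, forecasts, weights, sigmas)
```

In the revised method of this abstract, each member's fixed Gaussian would be replaced by an evolving conditional pdf derived from the particle filtering and Gaussian mixture modeling framework.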

Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

2012-05-01

390

Iterative Informed Audio Data Hiding Scheme Using Optimal Filter

Audio watermarking is a method that allows the insertion of an imperceptible mark on an audio signal; the audio data set represents the host that supports the embedded information and it is considered as "noise

Paris-Sud XI, Université de

391

NASA Astrophysics Data System (ADS)

Advanced 3D optical and laser scanners can generate mesh models with high-resolution details, while inevitably introducing noises from various sources and mesh irregularity due to inconsistent sampling. Noises and irregularity of a scanned model prohibit its use in practical applications where high quality models are required. However, optimizing a noisy mesh while preserving its geometric features is a challenging task. We present a robust two-step approach to meet the challenges of noisy mesh optimization. In the first step, we propose a joint bilateral filter to remove noises on a mesh while maintaining its volume and preserving its features. In the second step, we develop a constrained Laplacian smoothing scheme by adding two kinds of constraints into the original Laplacian equation. As most noises have been removed in the first step, we can easily detect feature edges from the model and add them as constraints in the Laplacian smoothing. As a result, the constrained scheme can simultaneously preserve sharp features and avoid volume shrinkage during mesh smoothing. By integrating these two steps, our approach can effectively remove noises, maintain features, improve regularity for a noisy mesh, as well as avoid side-effects such as volume shrinkage. Extensive qualitative and quantitative experiments have been performed on meshes with synthetic and raw noises to demonstrate the feasibility and effectiveness of our approach.
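The feature-preserving behavior of a bilateral filter is easiest to see in one dimension, where the mesh version above reduces to weighting neighbours by both spatial distance and value difference. A sketch with assumed kernel widths (not the authors' joint bilateral formulation):

```python
import math

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=3):
    """1D analogue of a bilateral filter: each sample becomes a weighted
    average of its neighbours, with weights falling off with both spatial
    distance (sigma_s) and value difference (sigma_r), so large steps
    (features) are preserved while small noise is smoothed away."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noisy step edge: the filter smooths the noise but keeps the edge sharp,
# because samples across the step get a negligible range weight.
noisy = [0.02, -0.01, 0.03, 0.0, 1.01, 0.98, 1.02, 1.0]
smoothed = bilateral_filter_1d(noisy)
```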

Wei, Mingqiang; Shen, Wuyao; Qin, Jing; Wu, Jianhuang; Wong, Tien-Tsin; Heng, Pheng-Ann

2013-11-01

392

MODIS SCA assimilation with the particle filter for improving discharge simulation

NASA Astrophysics Data System (ADS)

LISFLOOD is a distributed, semi-physical rainfall-runoff model designed for the simulation of hydrological processes in medium to large scale river basins. This model is used at the European Commission Joint Research Centre for studying floods, global hydrological changes and droughts. LISFLOOD is the basis of the European Flood Alert System (EFAS), which is a real-time probabilistic flood prediction system with a lead-time of up to 10 days. The aim of this study is to evaluate the feasibility of assimilation of satellite snow data into LISFLOOD. Furthermore, the impact of the assimilation on the snow simulation as well as on discharge will be assessed. For this purpose, MODIS Snow Cover Area (SCA) has been used here. Since cloud coverage limits the availability of MODIS data, we implemented methods for improving the data set, such as:
- combining the data from the two MODIS satellites
- merging data from previous days
- extrapolating data from neighboring pixels
- extrapolating data from pixels with similar altitudes
The data provided by the MODIS satellites is SCA, i.e. the presence or absence of snow, whereas the LISFLOOD model simulates Snow Water Equivalent (SWE). For the conversion from SWE to SCA we employed a snow depletion curve. The assimilation method used is the particle filter. This method is based on multiple perturbed simulations of the model, which at each assimilation time step are either kept or removed based on the similarity between the modeled SCA and the observed SCA (i.e., MODIS data). One major advantage of the particle filter as applied here is that model states are not modified directly and hence the model conserves the mass balance throughout the assimilation. Tests have been performed on synthetic data (normal LISFLOOD SCA used as observations) on a small basin (1-dimensional problem) and on a larger basin (7-dimensional problem), both located in the Czech Morava River basin.
These experiments showed the positive performance of the assimilation for improving SCA and model discharges. The impact of the observation error used has been assessed, as well as the impact of the frequency of assimilation (from 1 to 7 days). Finally, tests of assimilation of actual MODIS SCA data have been performed on the small and on the large basin (including the same tests on frequency of assimilation and on observations error). We showed that SCA was improved for all cases, but that discharges were not necessarily improved for high assimilation frequencies or significantly large observation errors. Increasing the dimension of the problem (from 1 to 7) deteriorates the performance of the assimilation system.
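The keep/remove resampling step described above can be sketched as follows; the linear depletion curve, the observation error, and the SWE range are hypothetical stand-ins for the corresponding LISFLOOD quantities:

```python
import math, random

random.seed(0)

def depletion_curve(swe, swe_max=100.0):
    """Map simulated SWE (mm) to fractional snow-cover area in [0, 1].
    A linear curve is assumed here; LISFLOOD's actual curve may differ."""
    return min(swe / swe_max, 1.0)

def particle_filter_step(particles, sca_obs, obs_sigma=0.1):
    """One keep/remove (resampling) step: weight each particle by its SCA
    misfit against the observation, then resample with replacement. The
    states themselves are never modified, which is why each surviving
    particle still conserves the model's mass balance."""
    weights = [math.exp(-0.5 * ((depletion_curve(p) - sca_obs) / obs_sigma) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(0.0, 100.0) for _ in range(200)]  # perturbed SWE states
resampled = particle_filter_step(particles, sca_obs=0.5)      # MODIS SCA = 0.5
```

After resampling, the ensemble concentrates on states whose simulated SCA matches the observation, which is the mechanism by which the MODIS data pulls the model toward reality.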

Thirel, G.; Salamon, P.; Burek, P.; Kalas, M.

2012-04-01

393

Two sub-swarms evolutionary particle swarm optimization based on team progress learning

The sociological background of particle swarm optimization is analyzed, and to prevent premature convergence, a modified approach that enhances the organizational management mechanism within the group is proposed. A two sub-swarms evolutionary particle swarm optimization based on team progress learning is proposed, borrowing ideas from social division and progress learning in management teams. The team members are divided into the

Shan-he Jiang; Ri-dong Zhang; Qi-shen Wang

2010-01-01

394

Fishing for Data: Using Particle Swarm Optimization to Search Data

NASA Astrophysics Data System (ADS)

As the size of data and model sets continues to increase, more efficient ways are needed to sift through the available information. We present a computational method which will efficiently search large parameter spaces to either map the space or find individual data/models of interest. Particle swarm optimization (PSO) is a subclass of artificial life computer algorithms. The PSO algorithm attempts to leverage "swarm intelligence" in finding optimal solutions to a problem. This system is often based on a biological model of a swarm (e.g. schooling fish). These biological models are broken down into a few simple rules which govern the behavior of the system. "Agents" (e.g. fish) are introduced and the agents, following the rules, search out solutions much like a fish would seek out food. We have made extensive modifications to the standard PSO model which increase its efficiency as well as adding the capacity to map a parameter space and find multiple solutions. Our modified PSO is ideally suited to search and map large sets of data/models which are degenerate or to search through data/models which are too numerous to analyze by hand. One example of this would include radiative transfer models, which are inherently degenerate. Applying the PSO algorithm will allow the degeneracy space to be mapped and thus better determine limits on dust shell parameters. Another example is searching through legacy data from a survey for hints of Polycyclic Aromatic Hydrocarbon emission. What might have once taken years of searching (and many frustrated graduate students) can now be relegated to the task of a computer which will work day and night for only the cost of electricity. We hope this algorithm will allow fellow astronomers to more efficiently search data and models, thereby freeing them to focus on the physics of the Universe.
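The few simple rules governing a basic swarm translate into a short algorithm. A minimal one-dimensional PSO sketch (the standard model, not the authors' modified variant), with conventional inertia and attraction coefficients assumed:

```python
import random

random.seed(1)

def pso_minimize(f, bounds, n_agents=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each agent is pulled toward its own
    personal best position and the swarm's global best, with inertia w."""
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n_agents)]
    vel = [0.0] * n_agents
    pbest = pos[:]                       # personal best positions
    pbest_val = [f(x) for x in pos]
    g = min(range(n_agents), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_agents):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Toy search: the minimum of (x - 3)^2 on [-10, 10] is at x = 3.
best_x, best_val = pso_minimize(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

Mapping a degeneracy space, as the abstract describes, requires keeping track of many good positions rather than a single global best, which is one of the modifications the authors allude to.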

Caputo, Daniel P.; Dolan, R.

2010-01-01

395

Usefulness of Nonlinear Interpolation and Particle Filter in Zigbee Indoor Positioning

NASA Astrophysics Data System (ADS)

The key to a fingerprint positioning algorithm is establishing an effective fingerprint information database based on the received signal strength indicator (RSSI) from different reference nodes. The traditional method is to set up multiple calibration sampling points in the location area and collect a large number of samples, which is very time consuming. With a Zigbee sensor network as the platform, and considering the influence of positioning signal interference, we propose an improved algorithm that builds a virtual database by polynomial interpolation, while the pre-estimated result is refined by a particle filter. Experimental results show that this method can quickly generate a simple, fine-grained localization information database and improve positioning accuracy at the same time.
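A virtual fingerprint database densifies a coarse calibration grid by interpolation. A sketch using bilinear interpolation, the lowest-order case of the polynomial interpolation described above, on a hypothetical 3x3 RSSI grid:

```python
import math

def bilinear_rssi(grid, x, y):
    """Interpolate RSSI at fractional position (x, y) from a coarse
    fingerprint grid measured at integer calibration points."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[y0][x0]
            + fx * (1 - fy) * grid[y0][x1]
            + (1 - fx) * fy * grid[y1][x0]
            + fx * fy * grid[y1][x1])

# Hypothetical 3x3 grid of measured RSSI (dBm) at calibration points.
grid = [[-40, -50, -60],
        [-45, -55, -65],
        [-50, -60, -70]]

# Virtual fingerprint halfway between four measured points.
virtual = bilinear_rssi(grid, 0.5, 0.5)
```

Generating such virtual fingerprints for every sub-grid position replaces the time-consuming dense survey with a much sparser calibration pass.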

Zhang, Xiang; Wu, Helei; Uradzi?ski, Marcin

2014-12-01

396

IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING

This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches. PMID:21132112

Bayard, David S.; Schumitzky, Alan

2009-01-01

397

OPTIMIZATION OF COAL PARTICLE FLOW PATTERNS IN LOW NOX BURNERS

It is well understood that the stability of axial diffusion flames is dependent on the mixing behavior of the fuel and combustion air streams. Combustion aerodynamic texts typically describe flame stability and transitions from laminar diffusion flames to fully developed turbulent flames as a function of increasing jet velocity. Turbulent diffusion flame stability is greatly influenced by recirculation eddies that transport hot combustion gases back to the burner nozzle. This recirculation enhances mixing and heats the incoming gas streams. Models describing these recirculation eddies utilize conservation of momentum and mass assumptions. Increasing the mass flow rate of either fuel or combustion air increases both the jet velocity and momentum for a fixed burner configuration. Thus, differentiating between gas velocity and momentum is important when evaluating flame stability under various operating conditions. The research efforts described herein are part of an ongoing project directed at evaluating the effect of flame aerodynamics on NO{sub x} emissions from coal fired burners in a systematic manner. This research includes both experimental and modeling efforts being performed at the University of Arizona in collaboration with Purdue University. The objective of this effort is to develop rational design tools for optimizing low NO{sub x} burners. Experimental studies include both cold-and hot-flow evaluations of the following parameters: primary and secondary inlet air velocity, coal concentration in the primary air, coal particle size distribution and flame holder geometry. Hot-flow experiments will also evaluate the effect of wall temperature on burner performance.

Jost O.L. Wendt; Gregory E. Ogden; Jennifer Sinclair; Stephanus Budilarto

2001-09-04

398

Transformer fault prediction based on particle swarm optimization and SVM

NASA Astrophysics Data System (ADS)

Forecasting of dissolved gases content in power transformer oil is very significant to detect incipient failures of transformers early and ensure normal operation of the entire power system. Forecasting of dissolved gases content in power transformer oil is a complicated problem due to its nonlinearity and the small quantity of training data. The support vector machine (SVM) has been successfully employed to solve regression problems of nonlinearity and small samples. However, it is difficult to choose the best parameters of the SVM. In this study, a support vector machine is proposed to forecast dissolved gases content in power transformer oil, in which Particle Swarm Optimization (PSO) is used to determine the free parameters of the support vector machine. The experimental data from the electric power company in Sichuan are used to illustrate the performance of the proposed PSO-SVM model. The experimental results indicate that the proposed PSO-SVM model can achieve greater forecasting accuracy than the grey model (GM) under the circumstances of small samples. Consequently, the PSO-SVM model is a proper alternative for forecasting dissolved gases content in power transformer oil.

Zhang, Yan; Zhang, Bide; Pei, Zichun; Wang, Yan

2011-06-01

399

Operon Prediction using Chaos Embedded Particle Swarm Optimization.

Operons contain valuable information for drug design and determining protein functions. Genes within an operon are co-transcribed to a single-strand mRNA and must be co-regulated. The identification of operons is thus critical for a detailed understanding of gene regulation. However, currently used experimental methods for operon detection are generally difficult to implement and time-consuming. In this paper, we propose a chaotic binary particle swarm optimization (CBPSO) to predict operons in bacterial genomes. The intergenic distance, participation in the same metabolic pathway and the cluster of orthologous groups (COG) properties of the Escherichia coli genome are used to design a fitness function. Furthermore, the Bacillus subtilis, Pseudomonas aeruginosa PA01, Staphylococcus aureus and Mycobacterium tuberculosis genomes are tested and evaluated for accuracy, sensitivity, and specificity. The computational results indicate that the proposed method works effectively in terms of enhancing the performance of the operon prediction. The proposed method also achieved a good balance between sensitivity and specificity when compared to methods from the literature. PMID:23713004
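A chaotic binary PSO typically differs from the standard binary PSO in that chaotic sequences, commonly generated by a logistic map, replace the uniform random draws in the velocity update, while a sigmoid maps each velocity to a bit-flip probability. A minimal sketch of one update step, with conventional coefficient values assumed (not necessarily those used by the authors):

```python
import math, random

random.seed(2)

def logistic_map(z, mu=4.0):
    """Chaotic logistic map; with mu = 4 it generates sequences in (0, 1)
    that stand in for the uniform random draws r1, r2 of standard PSO."""
    return mu * z * (1.0 - z)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def cbpso_step(bits, vel, pbest, gbest, z, w=0.7, c1=2.0, c2=2.0):
    """One chaotic binary PSO update: velocities are updated with
    chaos-driven coefficients, then each bit is set to 1 with
    probability sigmoid(velocity)."""
    new_bits, new_vel = [], []
    for i, b in enumerate(bits):
        z = logistic_map(z)
        r1 = z
        z = logistic_map(z)
        r2 = z
        v = w * vel[i] + c1 * r1 * (pbest[i] - b) + c2 * r2 * (gbest[i] - b)
        new_vel.append(v)
        new_bits.append(1 if random.random() < sigmoid(v) else 0)
    return new_bits, new_vel, z

# A 5-bit candidate solution (e.g. gene-pair operon/non-operon labels).
bits = [0, 1, 0, 1, 0]
vel = [0.0] * 5
gbest = [1, 1, 1, 1, 1]
bits, vel, z = cbpso_step(bits, vel, pbest=bits[:], gbest=gbest, z=0.3)
```

The chaotic sequence is deterministic but non-repeating, which is credited with helping binary PSO escape the premature convergence that plain pseudo-random draws can suffer.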

Chuang, Li-Yeh; Yang, Cheng-Huei; Tsai, Jui-Hung; Yang, Cheng-Hong

2013-05-20

400

Operon prediction using chaos embedded particle swarm optimization.

Operons contain valuable information for drug design and determining protein functions. Genes within an operon are co-transcribed to a single-strand mRNA and must be coregulated. The identification of operons is, thus, critical for a detailed understanding of the gene regulations. However, currently used experimental methods for operon detection are generally difficult to implement and time consuming. In this paper, we propose a chaotic binary particle swarm optimization (CBPSO) to predict operons in bacterial genomes. The intergenic distance, participation in the same metabolic pathway and the cluster of orthologous groups (COG) properties of the Escherichia coli genome are used to design a fitness function. Furthermore, the Bacillus subtilis, Pseudomonas aeruginosa PA01, Staphylococcus aureus and Mycobacterium tuberculosis genomes are tested and evaluated for accuracy, sensitivity, and specificity. The computational results indicate that the proposed method works effectively in terms of enhancing the performance of the operon prediction. The proposed method also achieved a good balance between sensitivity and specificity when compared to methods from the literature. PMID:24384714

Chuang, Li-Yeh; Yang, Cheng-Huei; Tsai, Jui-Hung; Yang, Cheng-Hong

2013-01-01

401

Hybrid Particle Swarm Optimization for Vehicle Routing Problem with Reverse Logistics

The vehicle routing problem (VRP) is a well-known combinatorial optimization problem that holds a central place in logistics management. This paper proposes a hybrid particle swarm optimization (PSO) for the VRP with reverse logistics, which possesses a new strategy to represent the solution of the problem, and in the evolution of the PSO, the SA algorithm is used to optimize the sequence of the

Yang Peng

2009-01-01

402

A particle swarm optimization approach for optimum design of PID controller in AVR system

In this paper, a novel design method for determining the optimal proportional-integral-derivative (PID) controller parameters of an AVR system using the particle swarm optimization (PSO) algorithm is presented. This paper demonstrates in detail how to employ the PSO method to efficiently search for the optimal PID controller parameters of an AVR system. The proposed approach has superior features, including easy implementation,

Zwe-Lee Gaing

2004-01-01

403

Collision-Free Path Planning for Mobile Robots Using Chaotic Particle Swarm Optimization

Path planning for mobile robots is an important topic in modern robotics studies. This paper proposes a new approach to the collision-free path planning problem for mobile robots using particle swarm optimization combined with chaos iterations. The particle swarm optimization algorithm is run to get the global best particle as the candidate solution, and then local chaotic search iterations are

Qiang Zhao; Shaoze Yan

2005-01-01

404

Hybrid Kalman/H∞ Filter in Designing Optimal Navigation of Vehicle in PRT System

A PRT (Personal Rapid Transit) system operates automatically, so it is important to determine the exact position of each vehicle. Many PRT systems have adopted GPS for position, speed, and direction. In this paper, we propose a combination of the Kalman filter and the H∞ filter, known as the hybrid Kalman/H∞ filter, for application to the GPS navigation algorithm.
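As a rough illustration of the Kalman half of such a hybrid filter, here is a scalar predict/update cycle for a GPS position estimate, assuming a random-walk motion model and hypothetical noise variances (the paper's actual state model and the H∞ component are not reproduced):

```python
def kalman_update(x, P, z, R, Q=0.01):
    """One predict/update cycle of a scalar Kalman filter.
    x is the position estimate, P its variance, z a GPS measurement
    with noise variance R, and Q the process noise of the random walk."""
    # Predict: random-walk model, so the mean is unchanged and
    # uncertainty grows by the process noise.
    P = P + Q
    # Update: blend prediction and measurement by the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# Hypothetical sequence of noisy position fixes around a true value of 1.0.
x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_update(x, P, z, R=0.25)
```

The H∞ filter used in the hybrid scheme replaces this minimum-variance gain with one that bounds the worst-case estimation error, which is useful when the GPS noise statistics are poorly known.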

Hyunsoo Kim; Hoang Hieu Nguyen; Phi Long Nguyen; Han Sil Kim; Young Hwan Jang; Myungseon Ryu; Changho Choi

2007-01-01

405

A field-aged, passive diesel particulate filter (DPF) employed in a school bus retrofit program was evaluated for emissions of particle mass and number concentration before, during and after regeneration. For the particle mass measurements, filter samples were collected for gravimetric analysis with a partial flow sampling system, which sampled proportionally to the exhaust flow. Total number concentration and number-size distributions were measured by a condensation particle counter and scanning mobility particle sizer, respectively. The results of the evaluation show that the number concentration emissions decreased as the DPF became loaded with soot. However, after soot removal by regeneration, the number concentration emissions were approximately 20 times greater, which suggests the importance of the soot layer in helping to trap particles. Contrary to the number concentration results, particle mass emissions decreased from 6 ± 1 mg/hp-hr before regeneration to 3 ± 2 mg/hp-hr after regeneration. This indicates that nanoparticles with diameter less than 50 nm may have been emitted after regeneration since these particles contribute little to the total mass. Overall, average particle emission reductions of 95% by mass and 10,000-fold by number concentration after four years of use provided evidence of the durability of a field-aged DPF. In contrast to previous reports for new DPFs in which elevated number concentrations occurred during the first 200 seconds of a transient cycle, the number concentration emissions were elevated during the second half of the heavy-duty federal test procedure when high speed was sustained. This information is relevant for the analysis of mechanisms by which particles are emitted from field-aged DPFs.

Barone, Teresa L [ORNL; Storey, John Morse [ORNL; Domingo, Norberto [ORNL

2010-01-01

406

Design and Optimization of Dual Band Microstrip Antenna Using Particle Swarm Optimization Technique

NASA Astrophysics Data System (ADS)

Dual-frequency operation of antennas has become a necessity for many applications in recent wireless communication systems, such as GPS and GSM services operating at two different frequency bands. A new technique to achieve dual band operation from different types of microstrip antennas is presented here. An evolutionary design process using a particle swarm optimization (PSO) algorithm in conjunction with the method of moments (MoM) is employed to obtain the geometric parameters of the antenna for the desired performance. In this article a PSO based on the IE3D method is used to design a dual band inset-feed microstrip antenna. The maximum return loss obtained is -43.95 dB at 2.4 GHz and -27.4 dB at 3.08 GHz. Its bandwidth of 33.54 MHz ranges from 2.38355 GHz to 2.41709 GHz. Simulated and experimental results of the antenna are discussed.

Behera, Santanu Kumar; Choukiker, Y.

2010-11-01

407

Design optimization of pin fin geometry using particle swarm optimization algorithm.

Particle swarm optimization (PSO) is employed to investigate the overall performance of a pin fin. The following study will examine the effect of governing parameters on overall thermal/fluid performance associated with different fin geometries, including rectangular plate fins as well as square, circular, and elliptical pin fins. The idea of entropy generation minimization (EGM) is employed to combine the effects of thermal resistance and pressure drop within the heat sink. A general dimensionless expression for the entropy generation rate is obtained by considering a control volume around the pin fin, including the base plate, and applying the conservation equations for mass and energy with the entropy balance. Selected fin geometries are examined for the heat transfer, fluid friction, and the minimum entropy generation rate corresponding to different parameters including axis ratio, aspect ratio, and Reynolds number. The results clearly indicate that the preferred fin profile is very dependent on these parameters. PMID:23741525

Hamadneh, Nawaf; Khan, Waqar A; Sathasivam, Saratha; Ong, Hong Choon

2013-01-01

408

An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

NASA Technical Reports Server (NTRS)

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
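The k = 1 case of this SVD-based reduction can be illustrated with power iteration, which finds the single direction in health-parameter space that best explains output variation in a least-squares sense; the sensitivity matrix below is hypothetical, not taken from the report:

```python
import math

def dominant_singular_vector(A, iters=100):
    """Power iteration on A^T A: returns the leading right singular vector
    of A, i.e. the one direction in (health-)parameter space that captures
    the most output variation. This is the k = 1 case of keeping the top-k
    singular vectors as the reduced tuning basis."""
    m, n = len(A), len(A[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        # w = A^T (A v)
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical 2x3 sensitivity of 2 measured outputs to 3 health parameters;
# the third parameter has no effect, so it drops out of the tuning direction.
A = [[2.0, 0.1, 0.0],
     [0.1, 1.0, 0.0]]
v1 = dominant_singular_vector(A)
```

In the actual technique a few leading singular vectors form the tuning basis, giving a vector small enough for a Kalman filter to estimate while approximating the full health-parameter effect.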

Litt, Jonathan S.

2005-01-01

409

An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

NASA Technical Reports Server (NTRS)

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

Litt, Jonathan S.

2007-01-01

410

An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

NASA Technical Reports Server (NTRS)

A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

Litt, Jonathan S.

2007-01-01

411

Matched filter optimization of kSZ measurements with a reconstructed cosmological flow field

NASA Astrophysics Data System (ADS)

We develop and test a new statistical method to measure the kinematic Sunyaev-Zel'dovich (kSZ) effect. A sample of independently detected clusters is combined with the cosmic flow field predicted from a galaxy redshift survey in order to derive a matched filter that optimally weights the kSZ signal for the sample as a whole given the noise involved in the problem. We apply this formalism to realistic mock microwave skies based on cosmological N-body simulations, and demonstrate its robustness and performance. In particular, we carefully assess the various sources of uncertainty, cosmic microwave background primary fluctuations, instrumental noise, uncertainties in the determination of the velocity field, and effects introduced by miscentring of clusters and by uncertainties of the mass-observable relation (normalization and scatter). We show that available data (Planck maps and the MaxBCG catalogue) should deliver a 7.7σ detection of the kSZ. A similar cluster catalogue with broader sky coverage should increase the detection significance to 13σ. We point out that such measurements could be binned in order to study the properties of the cosmic gas and velocity fields, or combined into a single measurement to constrain cosmological parameters or deviations of the law of gravity from General Relativity.
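For the simplified case of a diagonal noise covariance, a matched filter weights each cluster by its expected kSZ template amplitude over its noise variance, normalized so the stacked estimate is unbiased. A sketch with hypothetical per-cluster values (the paper's full covariance treatment is not reproduced):

```python
def matched_filter_weights(signal, noise_var):
    """Matched-filter weights for a diagonal noise covariance:
    w_i proportional to s_i / N_i, normalized so that sum(w_i * s_i) = 1,
    making the weighted sum an unbiased estimate of the signal amplitude."""
    w = [s / n for s, n in zip(signal, noise_var)]
    norm = sum(wi * si for wi, si in zip(w, signal))
    return [wi / norm for wi in w]

# Hypothetical per-cluster kSZ template amplitudes (from the predicted flow
# field) and noise variances (CMB primary + instrument), arbitrary units.
signal = [1.0, 0.5, 2.0, 1.5]
noise_var = [0.2, 0.1, 0.8, 0.4]

w = matched_filter_weights(signal, noise_var)
estimate = sum(wi * si for wi, si in zip(w, signal))  # recovers unit amplitude
```

Clusters with a strong predicted signal and low noise dominate the weights, which is exactly what "optimally weighting the kSZ signal for the sample as a whole" amounts to in this simplified setting.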

Li, Ming; Angulo, R. E.; White, S. D. M.; Jasche, J.

2014-09-01

412

Selection of plants for optimization of vegetative filter strips treating runoff from turfgrass.

Runoff from turf environments, such as golf courses, is of increasing concern due to the associated chemical contamination of lakes, reservoirs, rivers, and ground water. Pesticide runoff due to fungicides, herbicides, and insecticides used to maintain golf courses in acceptable playing condition is a particular concern. One possible approach to mitigate such contamination is through the implementation of effective vegetative filter strips (VFS) on golf courses and other recreational turf environments. The objective of the current study was to screen ten aesthetically acceptable plant species for their ability to remove four commonly-used and degradable pesticides: chlorpyrifos (CP), chlorothalonil (CT), pendimethalin (PE), and propiconazole (PR) from soil in a greenhouse setting, thus providing invaluable information as to the species composition that would be most efficacious for use in VFS surrounding turf environments. Our results revealed that blue flag iris (Iris versicolor) (76% CP, 94% CT, 48% PE, and 33% PR were lost from soil after 3 mo of plant growth), eastern gama grass (Tripsacum dactyloides) (47% CP, 95% CT, 17% PE, and 22% PR were lost from soil after 3 mo of plant growth), and big blue stem (Andropogon gerardii) (52% CP, 91% CT, 19% PE, and 30% PR were lost from soil after 3 mo of plant growth) were excellent candidates for the optimization of VFS as buffer zones abutting turf environments. Blue flag iris was most effective at removing selected pesticides from soil and had the highest aesthetic value of the plants tested. PMID:18689747

Smith, Katy E; Putnam, Raymond A; Phaneuf, Clifford; Lanza, Guy R; Dhankher, Om P; Clark, John M

2008-01-01

413

Industrial-scale filter dryers, equipped with one or more microwave input ports, have been modelled with the aim of detecting existing criticalities, proposing possible solutions and optimizing the overall system efficiency and treatment homogeneity. Three different loading conditions have been simulated, namely the empty applicator and the applicator partially loaded by a high-loss and a low-loss load whose dielectric properties correspond to those measured on real products. Modeling results allowed for the implementation of improvements to the original design such as the insertion of a waveguide transition and a properly designed pressure window, modification of the microwave inlet's position and orientation, alteration of the nozzles' geometry and distribution, and changing of the cleaning metallic torus dimensions and position. Experimental testing on representative loads, as well as in production sites, allowed for the confirmation of the validity of the implemented improvements, thus showing how numerical simulation can assist the designer in removing critical features and improving equipment performance when moving from conventional heating to hybrid microwave-assisted processing. PMID:18350999

Leonelli, Cristina; Veronesi, Paolo; Grisoni, Fabio

2007-01-01

414

NASA Astrophysics Data System (ADS)

We propose a fast multiscale face detector that boosts a set of SVM-based hierarchy classifiers constructed with two heterogeneous features, i.e. Multi-block Local Binary Patterns (MB-LBP) and Speeded Up Robust Features (SURF), at different image resolutions. In this hierarchical architecture, simple and fast classifiers using efficient MB-LBP descriptors remove large parts of the background in low and intermediate scale layers; thus only a small percentage of background patches look similar to faces and require a more accurate but slower classifier that uses the distinctive SURF descriptor to avoid false classifications in the finest scale. By propagating only those patterns that are not classified as background, we can quickly decrease the amount of data that needs to be processed. To lessen the training burden of the hierarchy classifier, in each scale layer a feature selection scheme using Binary Particle Swarm Optimization (BPSO) searches the entire feature space and filters out the minimum number of discriminative features that give the highest classification rate on a validation set; then these selected distinctive features are fed into the SVM classifier. We compared the detection performance of the proposed face detector with that of other state-of-the-art methods on the CMU+MIT face dataset. Our detector achieves the best overall detection performance. The training time of our algorithm is 60 times faster than the standard Adaboost algorithm. It takes about 70 ms for our face detector to process a 320×240 image, which is comparable to Viola and Jones' detector.

Pan, Hong; Xia, Si-Yu; Jin, Li-Zuo; Xia, Liang-Zheng

2011-12-01
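
As an illustration of the BPSO-based feature selection described above, the sketch below evolves bit-mask particles toward a compact, discriminative feature subset. The fitness function, feature counts, and swarm constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_particles, n_iters = 20, 12, 40
informative = set(range(5))  # hypothetical "truly useful" features

def fitness(mask):
    """Toy validation score: reward informative features, penalize subset size."""
    chosen = set(np.flatnonzero(mask))
    return len(chosen & informative) - 0.1 * len(chosen)

# Binary PSO: positions are bit masks, velocities map to bit-set
# probabilities through a sigmoid
X = rng.integers(0, 2, (n_particles, n_features))
V = rng.normal(0.0, 1.0, (n_particles, n_features))
pbest = X.copy()
pbest_f = np.array([fitness(x) for x in X])
gbest = pbest[np.argmax(pbest_f)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    V = np.clip(V, -6.0, 6.0)  # keep the sigmoid numerically well-behaved
    X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
    f = np.array([fitness(x) for x in X])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmax(pbest_f)].copy()

selected = np.flatnonzero(gbest)  # indices of the chosen feature subset
```

In the actual detector, the fitness would be the classification rate of an SVM on a validation set rather than this toy score.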

415

The extent of mass loss on Teflon filters caused by ammonium nitrate volatilization can be a substantial fraction of the measured particulate matter with an aerodynamic diameter less than 2.5 microm (PM2.5) or 10 microm (PM10) mass and depends on where and when it was collected. There is no straightforward method to correct for the mass loss using routine monitoring data. In southern California during the California Acid Deposition Monitoring Program, 30-40% of the gravimetric PM2.5 mass was lost during summer daytime. Lower mass losses occurred at more remote locations. The estimated potential mass loss in the Interagency Monitoring of Protected Visual Environments network was consistent with the measured loss observed in California. The biased mass measurement implies that use of Federal Reference Method data for fine particles may lead to control strategies that are biased toward sources of fugitive dust, other primary particle emission sources, and stable secondary particles (e.g., sulfates). This analysis clearly supports the need for speciated analysis of samples collected in a manner that preserves volatile species. Finally, although there is loss of volatile nitrate (NO3-) from Teflon filters during sampling, the NO3- remaining after collection is quite stable. We found little loss of NO3- from Teflon filters after 2 hr under vacuum and 1 min of heating by a cyclotron proton beam. PMID:14871017

Ashbaugh, Lowell L; Eldred, Robert A

2004-01-01

416

NASA Astrophysics Data System (ADS)

A hybrid algorithm combining particle swarm optimization (PSO) with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed with the PSO algorithm, owing to its strong global searching ability and robustness to random initial values; however, PSO converges slowly in the neighborhood of the global optimum. Therefore, when the change in the fitness function falls below a predefined value, the search is switched to the LPM to accelerate convergence. With the solutions obtained by PSO as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results from 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.

Zhuang, Yufei; Huang, Haibin

2014-02-01
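
The two-phase strategy above (global PSO search, then a fast local solver once improvement stalls) can be sketched as follows. A simple quadratic objective and a numerical-gradient descent stand in for the trajectory cost functional and the LPM solve, and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Toy objective (sphere); stands in for the time-optimal cost functional."""
    return float(np.sum(x ** 2))

dim, n_particles, tol = 4, 20, 1e-6
X = rng.uniform(-5, 5, (n_particles, dim))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
g = pbest[np.argmin(pbest_f)].copy()
prev_best = np.inf

# Phase 1: global PSO search until the best fitness stalls
for _ in range(500):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
    X = X + V
    fx = np.array([f(x) for x in X])
    better = fx < pbest_f
    pbest[better], pbest_f[better] = X[better], fx[better]
    g = pbest[np.argmin(pbest_f)].copy()
    if prev_best - pbest_f.min() < tol:  # improvement stalled: switch phase
        break
    prev_best = pbest_f.min()

# Phase 2: fast local refinement seeded with the PSO result
# (gradient descent with a numerical gradient stands in for the LPM solve)
x = g.copy()
for _ in range(200):
    grad = np.array([(f(x + 1e-6 * e) - f(x - 1e-6 * e)) / 2e-6
                     for e in np.eye(dim)])
    x -= 0.1 * grad
```

The handoff criterion (change in best fitness below `tol`) mirrors the paper's switching rule; everything after the switch would be a pseudospectral transcription in the real algorithm.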

417

Tribal particle swarm optimization for neurofuzzy inference systems and its prediction applications

NASA Astrophysics Data System (ADS)

This study presents tribal particle swarm optimization (TPSO) to optimize the parameters of the functional-link-based neurofuzzy inference system (FLNIS) for prediction applications. The proposed TPSO uses particle swarm optimization (PSO) as the evolution strategy of the tribes optimization algorithm (TOA) to balance local and global exploration of the search space. The proposed TPSO uses a self-clustering algorithm to divide the particle swarm into multiple tribes, and selects suitable evolution strategies to update each particle. The TPSO also uses a tribal adaptation mechanism to remove and generate particles and to reconstruct tribal links; this mechanism improves both the quality of individual tribes and the adaptation of the swarm as a whole. Finally, the FLNIS model with the proposed TPSO (FLNIS-TPSO) was used in several predictive applications. Experimental results demonstrated that the proposed TPSO method converges quickly and yields a lower RMS error than other current methods.

Chen, Cheng-Hung; Liao, Yen-Yun

2014-04-01

418

NASA Astrophysics Data System (ADS)

The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.

Manoli, Gabriele; Rossi, Matteo; Pasetto, Damiano; Deiana, Rita; Ferraris, Stefano; Cassiani, Giorgio; Putti, Mario

2015-02-01
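
The SIR data assimilation cycle described above (propagate, weight by the observation likelihood, resample) can be sketched on a scalar toy problem; the linear forward model and identity measurement operator below stand in for the Richards-equation solver and the ERT simulator with Archie's law, and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(x):
    """Forward model step (stands in for the Richards-equation solver)."""
    return 0.9 * x + rng.normal(0, 0.3, size=x.shape)

def observe(x):
    """Measurement operator (stands in for the ERT forward simulator + Archie's law)."""
    return x

n_particles, n_steps, obs_sigma = 500, 30, 0.5
truth = 2.0
particles = rng.normal(0, 2, n_particles)

for _ in range(n_steps):
    truth = 0.9 * truth                   # deterministic truth trajectory for the toy
    y = truth + rng.normal(0, obs_sigma)  # noisy observation
    particles = propagate(particles)
    # Importance weights from the observation likelihood
    w = np.exp(-0.5 * ((y - observe(particles)) / obs_sigma) ** 2)
    w /= w.sum()
    # Resample (the "R" in SIR) to combat weight degeneracy
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

estimate = particles.mean()
```

The paper's iterated variant repeats this update to refine the parameter estimates; the sketch shows only a single SIR pass over the state.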

419

Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.

Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near-optimal solution for problems in many applied disciplines. The algorithm makes no assumptions about the function to be optimized, and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature. PMID:25285268

Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee

2014-10-01
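
A minimal sketch of the differential evolution baseline mentioned above (DE/rand/1/bin), applied to a toy design criterion; the objective, population size, and control parameters F and CR are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    """Toy design criterion to minimize (stands in for e.g. a D-optimality loss)."""
    return float(np.sum((x - 0.5) ** 2))

dim, pop_size, F, CR = 5, 20, 0.8, 0.9
pop = rng.uniform(0, 1, (pop_size, dim))
fit = np.array([objective(p) for p in pop])

for _ in range(100):
    for i in range(pop_size):
        # Three distinct donors, none equal to the target index i
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)]
        mutant = a + F * (b - c)                 # differential mutation
        cross = rng.random(dim) < CR             # binomial crossover mask
        if not cross.any():
            cross[rng.integers(dim)] = True      # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
```

Swapping `objective` for a real design criterion (and adding box constraints on the design points) turns this into the kind of comparison the paper runs against PSO.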

420

For bottom-up proteomics there are a wide variety of database searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection - referred to as STEPS - utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types.

Piehowski, Paul D.; Petyuk, Vladislav A.; Sandoval, John D.; Burnum, Kristin E.; Kiebel, Gary R.; Monroe, Matthew E.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2013-03-01
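
The STEPS idea of exhaustively testing filter-parameter combinations can be sketched as a small grid search that maximizes target identifications subject to a decoy-estimated FDR cap; the scores, cutoff grids, and FDR threshold below are hypothetical, chosen only for illustration:

```python
import itertools

# Hypothetical per-spectrum results: (search score, mass error ppm, is_decoy)
results = [
    (3.2, 1.5, False), (2.8, 9.0, False), (1.1, 2.0, True),
    (2.5, 3.0, False), (0.9, 8.5, True),  (3.9, 0.5, False),
    (1.8, 4.0, False), (1.2, 7.0, True),  (2.2, 2.5, False),
]

score_cuts = [1.0, 1.5, 2.0, 2.5]   # user-defined parameter ranges
ppm_cuts = [2.0, 5.0, 10.0]
max_fdr = 0.05

best = None
for s_cut, p_cut in itertools.product(score_cuts, ppm_cuts):
    kept = [r for r in results if r[0] >= s_cut and r[1] <= p_cut]
    decoys = sum(r[2] for r in kept)
    targets = len(kept) - decoys
    fdr = decoys / max(targets, 1)   # decoy-based FDR estimate
    if fdr <= max_fdr and (best is None or targets > best[0]):
        best = (targets, s_cut, p_cut)  # parameter set maximizing confident IDs
```

A real STEPS run would sweep many more filter dimensions (e.g. charge state, peptide length, delta score), but the selection logic is the same.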

421

The determination and optimization of (rutile) pigment particle size distributions

NASA Technical Reports Server (NTRS)

A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.

Richards, L. W.

1972-01-01

422

Time-domain technique for optimal design of digital-filter equalizers.

NASA Technical Reports Server (NTRS)

A technique is presented for the design of frequency-sampling and transversal digital filters from specified unit-impulse responses. The multiplier coefficients for the digital filter are specified by the use of a linear-programming algorithm. Examples include the design of digital filters to generate intersymbol-free pulses for data transmission over ideal bandlimited channels and to equalize data transmission channels that have known unit-impulse responses.

Burlage, D. W.; Houts, R. C.

1972-01-01
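
The transversal (tapped-delay-line) equalizer design described above is linear in the tap weights, so it can be posed as a small linear system. The sketch below uses least squares in place of the paper's linear-programming criterion, with a hypothetical channel impulse response:

```python
import numpy as np

# Known channel unit-impulse response (hypothetical bandlimited channel)
h = np.array([1.0, 0.5, 0.2])

n_taps = 8   # transversal equalizer length
delay = 4    # desired overall delay (samples)

# Convolution of channel with equalizer is linear in the tap weights:
# build the convolution matrix H so that (h * w) = H @ w
n_out = len(h) + n_taps - 1
H = np.zeros((n_out, n_taps))
for i in range(n_taps):
    H[i:i + len(h), i] = h

# Target overall response: a single unit pulse at the chosen delay,
# i.e. zero intersymbol interference elsewhere
d = np.zeros(n_out)
d[delay] = 1.0

# Least-squares tap design (the paper instead uses a linear-programming
# algorithm, which minimizes peak error; lstsq keeps the sketch dependency-free)
w, *_ = np.linalg.lstsq(H, d, rcond=None)
overall = np.convolve(h, w)
```

Replacing the least-squares solve with an LP over the same matrix `H` (minimize the maximum of |Hw - d|) recovers the minimax design the paper describes.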

423

NASA Astrophysics Data System (ADS)

In many applications it is desirable to have the maximum radiation of an array directed normal to the axis of the array. In this paper, the broadside radiation patterns of three-ring Concentric Circular Antenna Arrays (CCAA) with central element feeding are reported. For each optimal synthesis, optimal current excitation weights and optimal ring radii are determined with the objective of maximum Sidelobe Level (SLL) reduction. The optimization technique adopted is Novel Particle Swarm Optimization (NPSO). Standard Particle Swarm Optimization (SPSO) is also employed for comparison but proves to be suboptimal. Extensive computational results show that the CCAA containing 4, 6, and 8 elements in its three successive rings, along with central element feeding, yields the grand minimum SLL (-56.58 dB) as determined by NPSO.

Mandal, Durbadal; Ghoshal, Sakti Prasad; Bhattacharjee, Anup Kumar

424

NASA Astrophysics Data System (ADS)

A design of experiment (DOE) was implemented to show the effects of various point-of-use filters on the coat process. The DOE takes into account the filter media, pore size, and pumping parameters such as dispense pressure, time, and spin speed. The coating was performed on a TEL Mark 8 coat track with an IDI M450 pump and PALL 16-stack Falcon filters. A KLA 2112 set at 0.69 µm pixel size was used to scan the wafers to detect and identify the defects. The process found to maintain a low-defect DUV42P coating, irrespective of the filter media or pore size, is a high start pressure, low end pressure, low dispense time, and high dispense speed. The IDI M450 pump can compensate for bubble-type defects by venting them out of the filter before they reach the dispense line, and its variable dispense rate allows the material in the dispense line to slow down at the end of dispense without creating microbubbles in the dispense line or tip. In addition, the differential pressure sensor will alarm if the pressure differential across the filter rises above a user-determined setpoint. The pleat design provides more surface area in the same footprint, reducing the differential pressure across the filter and transporting defects to the vent tube. The correct low-defect coating process will maximize the advantage of reducing filter pore size or changing the filter media.

Brakensiek, Nickolas L.; Kidd, Brian; Mesawich, Michael; Stevens, Don, Jr.; Gotlinsky, Barry

2003-06-01

425

The impact of filtering direct-feedthrough on the x-space theory of magnetic particle imaging

NASA Astrophysics Data System (ADS)

Magnetic particle imaging (MPI) is a new medical imaging modality that maps the instantaneous response of superparamagnetic particles under an applied magnetic field. In MPI, the excitation and detection of the nanoparticles occur simultaneously. Therefore, when a sinusoidal excitation field is applied to the system, the received signal spectrum contains both harmonics from the particles and a direct feedthrough signal from the source at the fundamental drive frequency. Removal of the induced feedthrough signal from the received signal requires significant filtering, which also removes part of the signal spectrum. In this paper, we present a method to investigate the impact of temporally filtering out individual lower order harmonics on the reconstructed x-space image. Analytic and simulation results show that the loss of particle signal at low frequency leads to a recoverable loss of low spatial frequency information in the x-space image. Initial experiments validate the findings and demonstrate the feasibility of the recovery of the lost signal. This builds on earlier work that discusses the ideal one-dimensional MPI system and harmonic decomposition of the MPI signal.

Lu, Kuan; Goodwill, Patrick; Zheng, Bo; Conolly, Steven

2011-03-01
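
The effect of filtering out the fundamental can be reproduced in a few lines: notching the drive frequency removes the direct feedthrough but also deletes the particle signal component at that frequency, which is exactly the low-spatial-frequency loss the paper analyzes. The signal amplitudes, drive frequency, and sample rate below are illustrative assumptions:

```python
import numpy as np

fs, f0, n = 102_400.0, 1_000.0, 4096  # sample rate, drive frequency, samples
t = np.arange(n) / fs

# Toy received signal: particle harmonics plus a large direct feedthrough at f0
particle = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)
feedthrough = 10.0 * np.sin(2 * np.pi * f0 * t)
received = particle + feedthrough

# Notch out the fundamental in the frequency domain: this removes the
# feedthrough, but also the particle signal component at f0
spec = np.fft.rfft(received)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[np.abs(freqs - f0) < 50] = 0.0
filtered = np.fft.irfft(spec, n)
```

After the notch, the third harmonic survives unchanged while all energy at the fundamental (feedthrough and particle signal alike) is gone; recovering that lost component is what the paper's method addresses.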

426

Influence of CO2 observations on the optimized CO2 flux in an ensemble Kalman filter

NASA Astrophysics Data System (ADS)

In this study, the effect of CO2 observations on an analysis of surface CO2 flux was calculated using an influence matrix in the CarbonTracker, which is an inverse modeling system for estimating surface CO2 flux based on an ensemble Kalman filter. The influence matrix represents a sensitivity of the analysis to observations. The experimental period was from January 2000 to December 2009. The diagonal element of the influence matrix (i.e., analysis sensitivity) is globally 4.8% on average, which implies that the analysis extracts 4.8% of the information from the observations and 95.2% from the background each assimilation cycle. Because the surface CO2 flux in each week is optimized by 5 weeks of observations, the cumulative impact over 5 weeks is 19.1%, much greater than 4.8%. The analysis sensitivity is inversely proportional to the number of observations used in the assimilation, which is distinctly apparent in continuous observation categories with a sufficient number of observations. The time series of the globally averaged analysis sensitivities shows seasonal variations, with greater sensitivities in summer and lower sensitivities in winter, which is attributed to the surface CO2 flux uncertainty. The time-averaged analysis sensitivities in the Northern Hemisphere are greater than those in the tropics and the Southern Hemisphere. The trace of the influence matrix (i.e., information content) is a measure of the total information extracted from the observations. The information content indicates an imbalance between the observation coverage in North America and that in other regions. Approximately half of the total observational information is provided by continuous observations, mainly from North America, which indicates that continuous observations are the most informative and that comprehensive coverage of additional observations in other regions is necessary to estimate the surface CO2 flux in these areas as accurately as in North America.

Kim, J.; Kim, H. M.; Cho, C.-H.

2014-12-01
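
The analysis sensitivity and information content discussed above are the diagonal average and trace of the influence matrix S = HK, where K is the Kalman gain and H the observation operator. A minimal ensemble sketch, with assumed dimensions and observation-error variance:

```python
import numpy as np

rng = np.random.default_rng(4)

n_state, n_obs, n_ens = 10, 6, 50
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs)] = 1.0  # observe the first 6 state variables
R = 0.5 * np.eye(n_obs)                      # observation-error covariance

# Ensemble background and its sample covariance
ens = rng.normal(0, 1, (n_ens, n_state))
Pb = np.cov(ens, rowvar=False)

# Kalman gain and the influence matrix S = H K (obs-space analysis sensitivity)
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
S = H @ K

analysis_sensitivity = np.mean(np.diag(S))  # fraction of info drawn from obs
information_content = np.trace(S)           # total info extracted from obs
```

Each diagonal element of S lies between 0 and 1: near 1 the analysis trusts that observation, near 0 it trusts the background, which is how the paper's 4.8% global average should be read.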

427

The accurate localization of anatomical landmarks is a challenging task, often solved by domain-specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions, which (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training, landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information into a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights, together with the conformity of the connecting edges to the learnt geometric model, in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole-body CTs), with median localization errors of 0.80 mm, 1.19 mm, and 2.71 mm, respectively. PMID:23664450

Donner, René; Menze, Bjoern H.; Bischof, Horst; Langs, Georg

2013-01-01

428

The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow was examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal vein position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal vein placement of IVC filters in a near juxtarenal position with the downstream cone near the renal vein inflow likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.

Wang, S L; Singer, M A

2009-07-13

429

Modifying material properties provides another approach to optimize coated particle fuel used in pebble bed reactors. In this study, the MIT fuel performance model (TIMCOAT) was applied after benchmarking against the ...

Soontrapa, Chaiyod

2005-01-01

430

Optimal conditions for the screening of cervical scrapes for human papillomavirus (HPV) were investigated by using filter in situ hybridization. Since integrated and episomal HPV can be found, cell lines containing viral DNA in an integrated form (HPV in CaSki) or in an episomal state (BK virus-induced hamster tumor cells) were used for optimization experiments. An increase in sensitivity was achieved by alkaline denaturation and neutralization before the specimens were spotted onto the membrane. This increase was 5-fold for the episomal virus and 16-fold for the integrated virus in the model system, as compared with other methods. To evaluate this method on clinical material, 1,963 cervical scrapes were screened for the presence of HPV 6/11 and HPV 16. Nineteen scrapes were positive for HPV 6/11 or HPV 16; and in 1,810 scrapes, no HPV 6/11 or HPV 16 could be detected by the modified filter in situ hybridization technique. Scrapes from which the interpretation of the modified filter in situ hybridization results were equivocal (n = 71, 3.6%) or in which positivity was detected for both HPV 6/11 and HPV 16 (n = 63, 3.2%) were further analyzed by the DNA dot spot technique. Eight scrapes with an equivocal result and only one scrape showing a double positivity by the modified filter in situ hybridization technique could be confirmed in the dot spot assay. In the total group 12 scrapes were positive for HPV 6/11 DNA, 15 were positive for HPV 16 DNA, and 1 was positive for both HPV 6/11 and HPV 16 DNA. Southern blot analysis on modified filter in situ hybridization-positive and -negative scrapes revealed a 100% correlation. Images PMID:2536384

Melchers, W J; Herbrink, P; Walboomers, J M; Meijer, C J; vd Drift, H; Lindeman, J; Quint, W G

1989-01-01

432

Optimal filter design approaches to statistical process control for autocorrelated processes

reduces to a third-order filter on x_t without a prefilter. The ARMA(1,1) chart of Jiang et al. (2000) is a first-order filter on x_t with no prefilter. With the whitening prefilter, therefore, the dynamic structure of control charts can...

Chin, Chang-Ho

2005-11-01
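
The whitening-prefilter idea in this entry can be sketched for an AR(1) process: prefilter the autocorrelated observations to near-iid residuals, then apply a first-order (EWMA) filter as the chart statistic. The AR coefficient and chart constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
phi = 0.8   # hypothetical AR(1) autocorrelation, assumed known
n = 500

# Autocorrelated process observations x_t = phi * x_{t-1} + e_t
e = rng.normal(0, 1, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Whitening prefilter: residuals r_t = x_t - phi * x_{t-1} are ~ iid
r = x[1:] - phi * x[:-1]

# First-order filter (EWMA) on the whitened residuals as the chart statistic
lam = 0.2
z = np.zeros(len(r))
for t in range(1, len(r)):
    z[t] = lam * r[t] + (1 - lam) * z[t - 1]

# Steady-state 3-sigma control limits for the EWMA of unit-variance residuals
sigma_z = np.sqrt(lam / (2 - lam))
limits = (-3 * sigma_z, 3 * sigma_z)
```

Charting `z` against `limits` gives a residual-based EWMA chart; the entry's point is that the chart's dynamic structure is the composition of the whitening prefilter and this first-order filter.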

433

Full wave optimization of stripline tapped-in ridge waveguide bandpass filters

A bandpass ridge waveguide filter with input/output realized through tapped-in stripline is designed. Using a rigorous mode-matching technique, the generalized scattering matrices of all the building blocks can be obtained. The design procedure is described and examples are given to demonstrate the features of the proposed coupling structure. The proposed structure shows a considerable reduction of the filter's total length

M. A. El Sabbagh; Heng-Tung Hsu; K. A. Zaki; P. Pramanick; T. Dolan

2002-01-01

434

NASA Astrophysics Data System (ADS)

We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning.

Cao, Hui; Shu, Xuewen; Atai, Javid; Zuo, Jun; Xiong, Bangyun; Shen, Fangcheng; Liu, Xin; Cheng, Jianqun

2015-02-01

435

Optimized qualification protocol on particle cleanliness for EUV mask infrastructure

NASA Astrophysics Data System (ADS)

With the market introduction of the NXE:3100, Extreme Ultra Violet Lithography (EUVL) enters a new stage. Now infrastructure in the wafer fabs must be prepared for new processes and new materials. Especially the infrastructure for masks poses a challenge. Because of the absence of a pellicle reticle front sides are exceptionally vulnerable to particles. It was also shown that particles on the backside of a reticle may cause tool down time. These effects set extreme requirements to the cleanliness level of the fab infrastructure for EUV masks. The cost of EUV masks justifies the use of equipment that is qualified on particle cleanliness. Until now equipment qualification on particle cleanliness have not been carried out with statistically based qualification procedures. Since we are dealing with extreme clean equipment the number of observed particles is expected to be very low. These